00:00:00.001 Started by upstream project "autotest-per-patch" build number 131284
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.126 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:10.252 The recommended git tool is: git
00:00:10.252 using credential 00000000-0000-0000-0000-000000000002
00:00:10.255 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:10.269 Fetching changes from the remote Git repository
00:00:10.271 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:10.285 Using shallow fetch with depth 1
00:00:10.285 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:10.285 > git --version # timeout=10
00:00:10.298 > git --version # 'git version 2.39.2'
00:00:10.298 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:10.313 Setting http proxy: proxy-dmz.intel.com:911
00:00:10.313 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:15.961 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:15.978 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:15.991 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD)
00:00:15.991 > git config core.sparsecheckout # timeout=10
00:00:16.007 > git read-tree -mu HEAD # timeout=10
00:00:16.029 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5
00:00:16.054 Commit message: "packer: Fix typo in a package name"
00:00:16.054 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10
00:00:16.172 [Pipeline] Start of Pipeline
00:00:16.184 [Pipeline] library
00:00:16.186 Loading library shm_lib@master
00:00:16.186 Library shm_lib@master is cached. Copying from home.
00:00:16.201 [Pipeline] node
00:00:31.203 Still waiting to schedule task
00:00:31.203 Waiting for next available executor on ‘vagrant-vm-host’
00:12:23.268 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_3
00:12:23.269 [Pipeline] {
00:12:23.282 [Pipeline] catchError
00:12:23.285 [Pipeline] {
00:12:23.300 [Pipeline] wrap
00:12:23.310 [Pipeline] {
00:12:23.318 [Pipeline] stage
00:12:23.320 [Pipeline] { (Prologue)
00:12:23.336 [Pipeline] echo
00:12:23.338 Node: VM-host-WFP1
00:12:23.344 [Pipeline] cleanWs
00:12:23.352 [WS-CLEANUP] Deleting project workspace...
00:12:23.352 [WS-CLEANUP] Deferred wipeout is used...
00:12:23.357 [WS-CLEANUP] done
00:12:23.546 [Pipeline] setCustomBuildProperty
00:12:23.631 [Pipeline] httpRequest
00:12:24.005 [Pipeline] echo
00:12:24.007 Sorcerer 10.211.164.101 is alive
00:12:24.017 [Pipeline] retry
00:12:24.020 [Pipeline] {
00:12:24.035 [Pipeline] httpRequest
00:12:24.040 HttpMethod: GET
00:12:24.041 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:12:24.042 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:12:24.043 Response Code: HTTP/1.1 200 OK
00:12:24.043 Success: Status code 200 is in the accepted range: 200,404
00:12:24.045 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:12:24.189 [Pipeline] }
00:12:24.207 [Pipeline] // retry
00:12:24.215 [Pipeline] sh
00:12:24.497 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:12:24.513 [Pipeline] httpRequest
00:12:24.907 [Pipeline] echo
00:12:24.909 Sorcerer 10.211.164.101 is alive
00:12:24.920 [Pipeline] retry
00:12:24.923 [Pipeline] {
00:12:24.940 [Pipeline] httpRequest
00:12:24.966 HttpMethod: GET
00:12:24.967 URL: http://10.211.164.101/packages/spdk_c1dd46fc6e67a0e1bf6bf54e7835eb422b77a45f.tar.gz
00:12:24.967 Sending request to url: http://10.211.164.101/packages/spdk_c1dd46fc6e67a0e1bf6bf54e7835eb422b77a45f.tar.gz
00:12:24.968 Response Code: HTTP/1.1 200 OK
00:12:24.969 Success: Status code 200 is in the accepted range: 200,404
00:12:24.969 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_c1dd46fc6e67a0e1bf6bf54e7835eb422b77a45f.tar.gz
00:12:27.240 [Pipeline] }
00:12:27.257 [Pipeline] // retry
00:12:27.267 [Pipeline] sh
00:12:27.550 + tar --no-same-owner -xf spdk_c1dd46fc6e67a0e1bf6bf54e7835eb422b77a45f.tar.gz
00:12:30.091 [Pipeline] sh
00:12:30.372 + git -C spdk log --oneline -n5
00:12:30.372 c1dd46fc6 config: add SPDK_CONFIG_MAX_NUMA_NODES
00:12:30.372 38f302a8c thread: convert iobuf nodes to 1-sized arrays
00:12:30.372 c982d92d9 thread: add helper functions for init/free of iobuf_node
00:12:30.372 fd66efb5c thread: add struct iobuf_channel_node
00:12:30.373 f6a2477dd thread: add struct iobuf_node
00:12:30.392 [Pipeline] writeFile
00:12:30.408 [Pipeline] sh
00:12:30.694 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:12:30.706 [Pipeline] sh
00:12:31.044 + cat autorun-spdk.conf
00:12:31.044 SPDK_RUN_FUNCTIONAL_TEST=1
00:12:31.044 SPDK_TEST_NVME=1
00:12:31.044 SPDK_TEST_FTL=1
00:12:31.044 SPDK_TEST_ISAL=1
00:12:31.044 SPDK_RUN_ASAN=1
00:12:31.044 SPDK_RUN_UBSAN=1
00:12:31.044 SPDK_TEST_XNVME=1
00:12:31.044 SPDK_TEST_NVME_FDP=1
00:12:31.044 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:31.051 RUN_NIGHTLY=0
00:12:31.053 [Pipeline] }
00:12:31.067 [Pipeline] // stage
00:12:31.082 [Pipeline] stage
00:12:31.084 [Pipeline] { (Run VM)
00:12:31.097 [Pipeline] sh
00:12:31.376 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:12:31.376 + echo 'Start stage prepare_nvme.sh'
00:12:31.376 Start stage prepare_nvme.sh
00:12:31.376 + [[ -n 0 ]]
00:12:31.376 + disk_prefix=ex0
00:12:31.376 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]]
00:12:31.376 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]]
00:12:31.376 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf
00:12:31.376 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:31.376 ++ SPDK_TEST_NVME=1
00:12:31.376 ++ SPDK_TEST_FTL=1
00:12:31.376 ++ SPDK_TEST_ISAL=1
00:12:31.376 ++ SPDK_RUN_ASAN=1
00:12:31.376 ++ SPDK_RUN_UBSAN=1
00:12:31.376 ++ SPDK_TEST_XNVME=1
00:12:31.376 ++ SPDK_TEST_NVME_FDP=1
00:12:31.376 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:31.376 ++ RUN_NIGHTLY=0
00:12:31.376 + cd /var/jenkins/workspace/nvme-vg-autotest_3
00:12:31.376 + nvme_files=()
00:12:31.376 + declare -A nvme_files
00:12:31.376 + backend_dir=/var/lib/libvirt/images/backends
00:12:31.376 + nvme_files['nvme.img']=5G
00:12:31.376 + nvme_files['nvme-cmb.img']=5G
00:12:31.376 + nvme_files['nvme-multi0.img']=4G
00:12:31.376 + nvme_files['nvme-multi1.img']=4G
00:12:31.376 + nvme_files['nvme-multi2.img']=4G
00:12:31.376 + nvme_files['nvme-openstack.img']=8G
00:12:31.376 + nvme_files['nvme-zns.img']=5G
00:12:31.376 + (( SPDK_TEST_NVME_PMR == 1 ))
00:12:31.376 + (( SPDK_TEST_FTL == 1 ))
00:12:31.376 + nvme_files["nvme-ftl.img"]=6G
00:12:31.376 + (( SPDK_TEST_NVME_FDP == 1 ))
00:12:31.376 + nvme_files["nvme-fdp.img"]=1G
00:12:31.376 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:12:31.376 + for nvme in "${!nvme_files[@]}"
00:12:31.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:12:31.376 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:12:31.376 + for nvme in "${!nvme_files[@]}"
00:12:31.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G
00:12:31.376 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:12:31.376 + for nvme in "${!nvme_files[@]}"
00:12:31.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:12:31.376 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:12:31.377 + for nvme in "${!nvme_files[@]}"
00:12:31.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:12:31.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:12:31.377 + for nvme in "${!nvme_files[@]}"
00:12:31.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:12:31.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:12:31.377 + for nvme in "${!nvme_files[@]}"
00:12:31.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:12:31.377 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:12:31.377 + for nvme in "${!nvme_files[@]}"
00:12:31.377 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:12:31.635 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:12:31.635 + for nvme in "${!nvme_files[@]}"
00:12:31.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G
00:12:31.635 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:12:31.635 + for nvme in "${!nvme_files[@]}"
00:12:31.635 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:12:31.635 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:12:31.635 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:12:31.635 + echo 'End stage prepare_nvme.sh'
00:12:31.635 End stage prepare_nvme.sh
00:12:31.647 [Pipeline] sh
00:12:31.929 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:12:31.929 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:12:31.929
00:12:31.929 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant
00:12:31.929 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk
00:12:31.929 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3
00:12:31.929 HELP=0
00:12:31.929 DRY_RUN=0
00:12:31.929 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,
00:12:31.929 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:12:31.929 NVME_AUTO_CREATE=0
00:12:31.929 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,,
00:12:31.929 NVME_CMB=,,,,
00:12:31.929 NVME_PMR=,,,,
00:12:31.929 NVME_ZNS=,,,,
00:12:31.929 NVME_MS=true,,,,
00:12:31.929 NVME_FDP=,,,on,
00:12:31.929 SPDK_VAGRANT_DISTRO=fedora39
00:12:31.929 SPDK_VAGRANT_VMCPU=10
00:12:31.929 SPDK_VAGRANT_VMRAM=12288
00:12:31.929 SPDK_VAGRANT_PROVIDER=libvirt
00:12:31.929 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:12:31.929 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:12:31.929 SPDK_OPENSTACK_NETWORK=0
00:12:31.929 VAGRANT_PACKAGE_BOX=0
00:12:31.929 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile
00:12:31.929 FORCE_DISTRO=true
00:12:31.929 VAGRANT_BOX_VERSION=
00:12:31.929 EXTRA_VAGRANTFILES=
00:12:31.929 NIC_MODEL=e1000
00:12:31.929
00:12:31.929 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt'
00:12:31.929 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3
00:12:35.211 Bringing machine 'default' up with 'libvirt' provider...
00:12:36.145 ==> default: Creating image (snapshot of base box volume).
00:12:36.404 ==> default: Creating domain with the following settings...
00:12:36.404 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1729182431_8d800363a7faf87a46a0
00:12:36.404 ==> default: -- Domain type: kvm
00:12:36.404 ==> default: -- Cpus: 10
00:12:36.404 ==> default: -- Feature: acpi
00:12:36.404 ==> default: -- Feature: apic
00:12:36.404 ==> default: -- Feature: pae
00:12:36.404 ==> default: -- Memory: 12288M
00:12:36.404 ==> default: -- Memory Backing: hugepages:
00:12:36.404 ==> default: -- Management MAC:
00:12:36.404 ==> default: -- Loader:
00:12:36.404 ==> default: -- Nvram:
00:12:36.404 ==> default: -- Base box: spdk/fedora39
00:12:36.404 ==> default: -- Storage pool: default
00:12:36.404 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1729182431_8d800363a7faf87a46a0.img (20G)
00:12:36.404 ==> default: -- Volume Cache: default
00:12:36.404 ==> default: -- Kernel:
00:12:36.404 ==> default: -- Initrd:
00:12:36.404 ==> default: -- Graphics Type: vnc
00:12:36.404 ==> default: -- Graphics Port: -1
00:12:36.404 ==> default: -- Graphics IP: 127.0.0.1
00:12:36.404 ==> default: -- Graphics Password: Not defined
00:12:36.404 ==> default: -- Video Type: cirrus
00:12:36.404 ==> default: -- Video VRAM: 9216
00:12:36.404 ==> default: -- Sound Type:
00:12:36.404 ==> default: -- Keymap: en-us
00:12:36.404 ==> default: -- TPM Path:
00:12:36.404 ==> default: -- INPUT: type=mouse, bus=ps2
00:12:36.404 ==> default: -- Command line args:
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:12:36.404 ==> default: -> value=-drive,
00:12:36.404 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:12:36.404 ==> default: -> value=-drive,
00:12:36.404 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:12:36.404 ==> default: -> value=-drive,
00:12:36.404 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:36.404 ==> default: -> value=-drive,
00:12:36.404 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:36.404 ==> default: -> value=-drive,
00:12:36.404 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:12:36.404 ==> default: -> value=-drive,
00:12:36.404 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:12:36.404 ==> default: -> value=-device,
00:12:36.404 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:12:36.971 ==> default: Creating shared folders metadata...
00:12:36.971 ==> default: Starting domain.
00:12:38.878 ==> default: Waiting for domain to get an IP address...
00:12:57.052 ==> default: Waiting for SSH to become available...
00:12:57.052 ==> default: Configuring and enabling network interfaces...
00:13:02.324 default: SSH address: 192.168.121.88:22
00:13:02.324 default: SSH username: vagrant
00:13:02.324 default: SSH auth method: private key
00:13:05.610 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk
00:13:13.725 ==> default: Mounting SSHFS shared folder...
00:13:16.294 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:13:16.294 ==> default: Checking Mount..
00:13:17.708 ==> default: Folder Successfully Mounted!
00:13:17.708 ==> default: Running provisioner: file...
00:13:19.082 default: ~/.gitconfig => .gitconfig
00:13:19.339
00:13:19.339 SUCCESS!
00:13:19.339
00:13:19.339 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use.
00:13:19.339 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:13:19.339 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm.
00:13:19.339
00:13:19.348 [Pipeline] }
00:13:19.366 [Pipeline] // stage
00:13:19.375 [Pipeline] dir
00:13:19.376 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt
00:13:19.377 [Pipeline] {
00:13:19.390 [Pipeline] catchError
00:13:19.392 [Pipeline] {
00:13:19.405 [Pipeline] sh
00:13:19.717 + vagrant ssh-config --host vagrant
00:13:19.717 + sed -ne /^Host/,$p
00:13:19.717 + tee ssh_conf
00:13:23.006 Host vagrant
00:13:23.006 HostName 192.168.121.88
00:13:23.006 User vagrant
00:13:23.006 Port 22
00:13:23.006 UserKnownHostsFile /dev/null
00:13:23.006 StrictHostKeyChecking no
00:13:23.006 PasswordAuthentication no
00:13:23.006 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:13:23.006 IdentitiesOnly yes
00:13:23.006 LogLevel FATAL
00:13:23.006 ForwardAgent yes
00:13:23.006 ForwardX11 yes
00:13:23.006
00:13:23.020 [Pipeline] withEnv
00:13:23.023 [Pipeline] {
00:13:23.037 [Pipeline] sh
00:13:23.319 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:13:23.319 source /etc/os-release
00:13:23.319 [[ -e /image.version ]] && img=$(< /image.version)
00:13:23.319 # Minimal, systemd-like check.
00:13:23.319 if [[ -e /.dockerenv ]]; then
00:13:23.319 # Clear garbage from the node's name:
00:13:23.319 # agt-er_autotest_547-896 -> autotest_547-896
00:13:23.319 # $HOSTNAME is the actual container id
00:13:23.319 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:13:23.319 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:13:23.319 # We can assume this is a mount from a host where container is running,
00:13:23.319 # so fetch its hostname to easily identify the target swarm worker.
00:13:23.319 container="$(< /etc/hostname) ($agent)"
00:13:23.319 else
00:13:23.319 # Fallback
00:13:23.319 container=$agent
00:13:23.319 fi
00:13:23.319 fi
00:13:23.319 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:13:23.319
00:13:23.591 [Pipeline] }
00:13:23.607 [Pipeline] // withEnv
00:13:23.616 [Pipeline] setCustomBuildProperty
00:13:23.631 [Pipeline] stage
00:13:23.634 [Pipeline] { (Tests)
00:13:23.651 [Pipeline] sh
00:13:23.979 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:13:24.251 [Pipeline] sh
00:13:24.533 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:13:24.807 [Pipeline] timeout
00:13:24.808 Timeout set to expire in 50 min
00:13:24.809 [Pipeline] {
00:13:24.823 [Pipeline] sh
00:13:25.105 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:13:25.672 HEAD is now at c1dd46fc6 config: add SPDK_CONFIG_MAX_NUMA_NODES
00:13:25.684 [Pipeline] sh
00:13:25.966 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:13:26.240 [Pipeline] sh
00:13:26.583 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:13:26.858 [Pipeline] sh
00:13:27.140 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:13:27.399 ++ readlink -f spdk_repo
00:13:27.399 + DIR_ROOT=/home/vagrant/spdk_repo
00:13:27.399 + [[ -n /home/vagrant/spdk_repo ]]
00:13:27.399 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:13:27.399 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:13:27.399 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:13:27.399 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:13:27.399 + [[ -d /home/vagrant/spdk_repo/output ]]
00:13:27.399 + [[ nvme-vg-autotest == pkgdep-* ]]
00:13:27.399 + cd /home/vagrant/spdk_repo
00:13:27.399 + source /etc/os-release
00:13:27.399 ++ NAME='Fedora Linux'
00:13:27.399 ++ VERSION='39 (Cloud Edition)'
00:13:27.399 ++ ID=fedora
00:13:27.399 ++ VERSION_ID=39
00:13:27.399 ++ VERSION_CODENAME=
00:13:27.399 ++ PLATFORM_ID=platform:f39
00:13:27.399 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:13:27.399 ++ ANSI_COLOR='0;38;2;60;110;180'
00:13:27.399 ++ LOGO=fedora-logo-icon
00:13:27.400 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:13:27.400 ++ HOME_URL=https://fedoraproject.org/
00:13:27.400 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:13:27.400 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:13:27.400 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:13:27.400 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:13:27.400 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:13:27.400 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:13:27.400 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:13:27.400 ++ SUPPORT_END=2024-11-12
00:13:27.400 ++ VARIANT='Cloud Edition'
00:13:27.400 ++ VARIANT_ID=cloud
00:13:27.400 + uname -a
00:13:27.400 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:13:27.400 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:13:27.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:13:28.225 Hugepages
00:13:28.225 node hugesize free / total
00:13:28.225 node0 1048576kB 0 / 0
00:13:28.225 node0 2048kB 0 / 0
00:13:28.225
00:13:28.225 Type BDF Vendor Device NUMA Driver Device Block devices
00:13:28.225 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:13:28.225 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:13:28.225 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:13:28.225 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:13:28.225 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:13:28.484 + rm -f /tmp/spdk-ld-path
00:13:28.484 + source autorun-spdk.conf
00:13:28.484 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:13:28.484 ++ SPDK_TEST_NVME=1
00:13:28.484 ++ SPDK_TEST_FTL=1
00:13:28.484 ++ SPDK_TEST_ISAL=1
00:13:28.484 ++ SPDK_RUN_ASAN=1
00:13:28.484 ++ SPDK_RUN_UBSAN=1
00:13:28.484 ++ SPDK_TEST_XNVME=1
00:13:28.484 ++ SPDK_TEST_NVME_FDP=1
00:13:28.484 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:28.484 ++ RUN_NIGHTLY=0
00:13:28.484 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:13:28.484 + [[ -n '' ]]
00:13:28.484 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:13:28.484 + for M in /var/spdk/build-*-manifest.txt
00:13:28.484 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:13:28.484 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:13:28.484 + for M in /var/spdk/build-*-manifest.txt
00:13:28.484 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:13:28.484 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:13:28.484 + for M in /var/spdk/build-*-manifest.txt
00:13:28.484 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:13:28.484 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:13:28.484 ++ uname
00:13:28.484 + [[ Linux == \L\i\n\u\x ]]
00:13:28.484 + sudo dmesg -T
00:13:28.484 + sudo dmesg --clear
00:13:28.484 + dmesg_pid=5242
+ sudo dmesg -Tw
00:13:28.484 + [[ Fedora Linux == FreeBSD ]]
00:13:28.484 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:28.484 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:13:28.484 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:13:28.484 + [[ -x /usr/src/fio-static/fio ]]
00:13:28.484 + export FIO_BIN=/usr/src/fio-static/fio
00:13:28.484 + FIO_BIN=/usr/src/fio-static/fio
00:13:28.484 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:13:28.484 + [[ ! -v VFIO_QEMU_BIN ]]
00:13:28.484 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:13:28.484 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:28.484 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:13:28.484 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:13:28.484 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:28.484 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:13:28.484 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:13:28.484 Test configuration:
00:13:28.484 SPDK_RUN_FUNCTIONAL_TEST=1
00:13:28.484 SPDK_TEST_NVME=1
00:13:28.484 SPDK_TEST_FTL=1
00:13:28.484 SPDK_TEST_ISAL=1
00:13:28.484 SPDK_RUN_ASAN=1
00:13:28.484 SPDK_RUN_UBSAN=1
00:13:28.484 SPDK_TEST_XNVME=1
00:13:28.484 SPDK_TEST_NVME_FDP=1
00:13:28.484 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:13:28.744 RUN_NIGHTLY=0
16:28:04 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:13:28.744 16:28:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:28.744 16:28:04 -- scripts/common.sh@15 -- $ shopt -s extglob
00:13:28.744 16:28:04 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:13:28.744 16:28:04 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:28.744 16:28:04 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:28.744 16:28:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.744 16:28:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.744 16:28:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.744 16:28:04 -- paths/export.sh@5 -- $ export PATH
00:13:28.744 16:28:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.744 16:28:04 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:13:28.744 16:28:04 -- common/autobuild_common.sh@486 -- $ date +%s
00:13:28.744 16:28:04 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729182484.XXXXXX
00:13:28.744 16:28:04 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729182484.DcNcjV
00:13:28.744 16:28:04 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:13:28.744 16:28:04 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:13:28.744 16:28:04 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:13:28.744 16:28:04 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:13:28.744 16:28:04 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:13:28.744 16:28:04 -- common/autobuild_common.sh@502 -- $ get_config_params
00:13:28.744 16:28:04 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:13:28.744 16:28:04 -- common/autotest_common.sh@10 -- $ set +x
00:13:28.744 16:28:04 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:13:28.744 16:28:04 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:13:28.744 16:28:04 -- pm/common@17 -- $ local monitor
00:13:28.744 16:28:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:28.744 16:28:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:13:28.744 16:28:04 -- pm/common@25 -- $ sleep 1
00:13:28.744 16:28:04 -- pm/common@21 -- $ date +%s
00:13:28.744 16:28:04 -- pm/common@21 -- $ date +%s
00:13:28.744 16:28:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729182484
00:13:28.744 16:28:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1729182484
00:13:28.744 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729182484_collect-cpu-load.pm.log
00:13:28.744 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1729182484_collect-vmstat.pm.log
00:13:29.682 16:28:05 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:13:29.682 16:28:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:13:29.682 16:28:05 -- spdk/autobuild.sh@12 -- $ umask 022
00:13:29.682 16:28:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:13:29.682 16:28:05 -- spdk/autobuild.sh@16 -- $ date -u
00:13:29.682 Thu Oct 17 04:28:05 PM UTC 2024
00:13:29.682 16:28:05 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:13:29.682 v25.01-pre-80-gc1dd46fc6
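The autobuild prologue above derives a per-build scratch workspace from the epoch timestamp, starts the two power monitors in the background, and relies on the `trap stop_monitor_resources EXIT` line to tear them down when autobuild exits. A minimal bash sketch of that pattern; the variable and script names mirror the log, but the loop body and the helper's internals are assumptions, not the actual autobuild_common.sh/pm source:

    # Hedged sketch of the workspace + resource-monitor pattern seen above.
    ts=$(date +%s)                                    # 1729182484 in this run
    SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")  # unique scratch dir per build
    for mon in collect-cpu-load collect-vmstat; do
        # Each monitor logs to .../power/monitor.autobuild.sh.<ts>_<mon>.pm.log
        scripts/perf/pm/"$mon" -d "$out/power" -l -p "monitor.autobuild.sh.$ts" &
    done
    trap stop_monitor_resources EXIT                  # assumed to kill the monitors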
00:13:29.682 16:28:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:13:29.682 16:28:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:13:29.682 16:28:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:13:29.682 16:28:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:13:29.682 16:28:05 -- common/autotest_common.sh@10 -- $ set +x
00:13:29.682 ************************************
00:13:29.682 START TEST asan
00:13:29.682 ************************************
00:13:29.682 using asan
00:13:29.682 16:28:05 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:13:29.682
00:13:29.682 real 0m0.001s
00:13:29.682 user 0m0.000s
00:13:29.682 sys 0m0.000s
00:13:29.682 16:28:05 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:13:29.682 16:28:05 asan -- common/autotest_common.sh@10 -- $ set +x
00:13:29.682 ************************************
00:13:29.682 END TEST asan
00:13:29.682 ************************************
00:13:29.941 16:28:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:13:29.941 16:28:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:13:29.941 16:28:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:13:29.941 16:28:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:13:29.941 16:28:05 -- common/autotest_common.sh@10 -- $ set +x
00:13:29.941 ************************************
00:13:29.941 START TEST ubsan
00:13:29.941 ************************************
00:13:29.941 using ubsan
00:13:29.941 16:28:05 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:13:29.941
00:13:29.941 real 0m0.000s
00:13:29.941 user 0m0.000s
00:13:29.941 sys 0m0.000s
00:13:29.941 16:28:05 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:13:29.941 16:28:05 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:13:29.941 ************************************
00:13:29.941 END TEST ubsan
00:13:29.941 ************************************
00:13:29.941 16:28:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:13:29.941 16:28:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:13:29.941 16:28:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:13:29.941 16:28:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:13:29.941 16:28:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:13:29.941 16:28:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:13:29.941 16:28:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:13:29.941 16:28:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:13:29.941 16:28:06 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:13:29.941 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:13:29.941 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:13:30.508 Using 'verbs' RDMA provider
00:13:46.796 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:14:04.883 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:14:04.883 Creating mk/config.mk...done.
00:14:04.883 Creating mk/cc.flags.mk...done.
00:14:04.883 Type 'make' to build.
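The asan and ubsan blocks above are emitted by the run_test helper from autotest_common.sh, which brackets a command in START/END banners and times it; that is why each block carries real/user/sys lines. A rough, simplified reconstruction of the observable behavior (a sketch, not the actual SPDK source, which also toggles xtrace around the body):

    run_test() {    # hedged sketch of what the helper appears to do
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"   # produces the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    run_test ubsan echo 'using ubsan'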
00:14:04.883 16:28:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:14:04.883 16:28:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:14:04.883 16:28:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:14:04.883 16:28:38 -- common/autotest_common.sh@10 -- $ set +x
00:14:04.883 ************************************
00:14:04.883 START TEST make
00:14:04.883 ************************************
00:14:04.883 16:28:38 make -- common/autotest_common.sh@1125 -- $ make -j10
00:14:04.883 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:14:04.883 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:14:04.883 meson setup builddir \
00:14:04.883 -Dwith-libaio=enabled \
00:14:04.883 -Dwith-liburing=enabled \
00:14:04.883 -Dwith-libvfn=disabled \
00:14:04.883 -Dwith-spdk=disabled \
00:14:04.883 -Dexamples=false \
00:14:04.883 -Dtests=false \
00:14:04.883 -Dtools=false && \
00:14:04.883 meson compile -C builddir && \
00:14:04.883 cd -)
00:14:04.883 make[1]: Nothing to be done for 'all'.
00:14:05.450 The Meson build system
00:14:05.450 Version: 1.5.0
00:14:05.450 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:14:05.450 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:14:05.450 Build type: native build
00:14:05.450 Project name: xnvme
00:14:05.450 Project version: 0.7.5
00:14:05.450 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:14:05.450 C linker for the host machine: cc ld.bfd 2.40-14
00:14:05.450 Host machine cpu family: x86_64
00:14:05.450 Host machine cpu: x86_64
00:14:05.450 Message: host_machine.system: linux
00:14:05.450 Compiler for C supports arguments -Wno-missing-braces: YES
00:14:05.450 Compiler for C supports arguments -Wno-cast-function-type: YES
00:14:05.450 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:14:05.450 Run-time dependency threads found: YES
00:14:05.450 Has header "setupapi.h" : NO
00:14:05.450 Has header "linux/blkzoned.h" : YES
00:14:05.450 Has header "linux/blkzoned.h" : YES (cached)
00:14:05.450 Has header "libaio.h" : YES
00:14:05.450 Library aio found: YES
00:14:05.450 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:14:05.450 Run-time dependency liburing found: YES 2.2
00:14:05.450 Dependency libvfn skipped: feature with-libvfn disabled
00:14:05.450 Found CMake: /usr/bin/cmake (3.27.7)
00:14:05.450 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:14:05.450 Subproject spdk : skipped: feature with-spdk disabled
00:14:05.450 Run-time dependency appleframeworks found: NO (tried framework)
00:14:05.450 Run-time dependency appleframeworks found: NO (tried framework)
00:14:05.450 Library rt found: YES
00:14:05.450 Checking for function "clock_gettime" with dependency -lrt: YES
00:14:05.450 Configuring xnvme_config.h using configuration
00:14:05.450 Configuring xnvme.spec using configuration
00:14:05.450 Run-time dependency bash-completion found: YES 2.11
00:14:05.450 Message: Bash-completions: /usr/share/bash-completion/completions
00:14:05.450 Program cp found: YES (/usr/bin/cp)
00:14:05.450 Build targets in project: 3
00:14:05.450
00:14:05.450 xnvme 0.7.5
00:14:05.450
00:14:05.450 Subprojects
00:14:05.450 spdk : NO Feature 'with-spdk' disabled
00:14:05.450
00:14:05.450 User defined options
00:14:05.450 examples : false
00:14:05.450 tests : false
00:14:05.450 tools : false
00:14:05.450 with-libaio : enabled
00:14:05.450 with-liburing: enabled
00:14:05.450 with-libvfn : disabled
00:14:05.450 with-spdk : disabled
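The -D flags passed to meson setup above are ordinary Meson project options, which is why they reappear verbatim in the "User defined options" summary that closes the configure step. With a standard Meson build directory, such options can be inspected or changed later without redoing setup; a small illustration of generic Meson usage (not an SPDK-specific workflow):

    meson configure builddir                  # list current option values
    meson configure builddir -Dexamples=true  # flip one option in place
    meson compile -C builddir                 # rebuild with the new value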
00:14:05.450
00:14:05.450 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:14:05.708 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:14:05.708 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:14:05.708 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:14:05.708 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:14:05.708 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:14:05.708 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:14:05.708 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:14:05.708 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:14:05.966 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:14:05.966 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:14:05.966 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:14:05.966 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:14:05.966 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:14:05.966 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:14:05.966 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:14:05.966 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:14:05.966 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:14:05.966 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:14:05.966 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:14:05.966 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:14:05.966 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:14:05.966 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:14:05.966 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:14:05.966 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:14:05.966 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:14:05.966 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:14:05.966 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:14:05.966 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:14:05.966 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:14:05.966 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:14:05.966 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:14:05.966 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:14:05.966 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:14:05.966 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:14:06.225 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:14:06.225 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:14:06.225 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:14:06.225 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:14:06.225 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:14:06.225 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:14:06.225 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:14:06.225 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:14:06.225 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:14:06.225 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:14:06.225 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:14:06.225 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:14:06.225 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:14:06.225 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:14:06.225 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:14:06.225 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:14:06.225 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:14:06.225 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:14:06.225 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:14:06.225 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:14:06.225 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:14:06.225 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:14:06.225 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:14:06.225 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:14:06.225 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:14:06.225 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:14:06.225 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:14:06.225 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:14:06.484 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:14:06.484 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:14:06.484 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:14:06.484 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:14:06.484 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:14:06.484 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:14:06.484 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:14:06.484 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:14:06.484 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:14:06.484 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:14:06.484 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:14:06.484 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:14:06.742 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:14:07.001 [75/76] Linking static target lib/libxnvme.a
00:14:07.001 [76/76] Linking target lib/libxnvme.so.0.7.5
00:14:07.001 INFO: autodetecting backend as ninja
00:14:07.001 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:14:07.001 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:14:15.124 The Meson build system
00:14:15.124 Version: 1.5.0
00:14:15.124 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:14:15.124 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:14:15.124 Build type: native build
Program cat found: YES (/usr/bin/cat)
00:14:15.124 Project name: DPDK
00:14:15.124 Project version: 24.03.0
00:14:15.125 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:14:15.125 C linker for the host machine: cc ld.bfd 2.40-14
00:14:15.125 Host machine cpu family: x86_64
00:14:15.125 Host machine cpu: x86_64
00:14:15.125 Message: ## Building in Developer Mode ##
00:14:15.125 Program pkg-config found: YES (/usr/bin/pkg-config)
00:14:15.125 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:14:15.125 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:14:15.125 Program python3 found: YES (/usr/bin/python3)
00:14:15.125 Program cat found: YES (/usr/bin/cat)
00:14:15.125 Compiler for C supports arguments -march=native: YES
00:14:15.125 Checking for size of "void *" : 8
00:14:15.125 Checking for size of "void *" : 8 (cached)
00:14:15.125 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:14:15.125 Library m found: YES
00:14:15.125 Library numa found: YES
00:14:15.125 Has header "numaif.h" : YES
00:14:15.125 Library fdt found: NO
00:14:15.125 Library execinfo found: NO
00:14:15.125 Has header "execinfo.h" : YES
00:14:15.125 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:14:15.125 Run-time dependency libarchive found: NO (tried pkgconfig)
00:14:15.125 Run-time dependency libbsd found: NO (tried pkgconfig)
00:14:15.125 Run-time dependency jansson found: NO (tried pkgconfig)
00:14:15.125 Run-time dependency openssl found: YES 3.1.1
00:14:15.125 Run-time dependency libpcap found: YES 1.10.4
00:14:15.125 Has header "pcap.h" with dependency libpcap: YES
00:14:15.125 Compiler for C supports arguments -Wcast-qual: YES
00:14:15.125 Compiler for C supports arguments -Wdeprecated: YES
00:14:15.125 Compiler for C supports arguments -Wformat: YES
00:14:15.125 Compiler for C supports arguments -Wformat-nonliteral: NO
00:14:15.125 Compiler for C supports arguments -Wformat-security: NO
00:14:15.125 Compiler for C supports arguments -Wmissing-declarations: YES
00:14:15.125 Compiler for C supports arguments -Wmissing-prototypes: YES
00:14:15.125 Compiler for C supports arguments -Wnested-externs: YES
00:14:15.125 Compiler for C supports arguments -Wold-style-definition: YES
00:14:15.125 Compiler for C supports arguments -Wpointer-arith: YES
00:14:15.125 Compiler for C supports arguments -Wsign-compare: YES
00:14:15.125 Compiler for C supports arguments -Wstrict-prototypes: YES
00:14:15.125 Compiler for C supports arguments -Wundef: YES
00:14:15.125 Compiler for C supports arguments -Wwrite-strings: YES
00:14:15.125 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:14:15.125 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:14:15.125 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:14:15.125 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:14:15.125 Program objdump found: YES (/usr/bin/objdump)
00:14:15.125 Compiler for C supports arguments -mavx512f: YES
00:14:15.125 Checking if "AVX512 checking" compiles: YES
00:14:15.125 Fetching value of define "__SSE4_2__" : 1
00:14:15.125 Fetching value of define "__AES__" : 1
00:14:15.125 Fetching value of define "__AVX__" : 1
00:14:15.125 Fetching value of define "__AVX2__" : 1
00:14:15.125 Fetching value of define "__AVX512BW__" : 1
00:14:15.125 Fetching value of define "__AVX512CD__" : 1
00:14:15.125 Fetching value of define "__AVX512DQ__" : 1
00:14:15.125 Fetching value of define "__AVX512F__" : 1
00:14:15.125 Fetching value of define "__AVX512VL__" : 1
00:14:15.125 Fetching value of define "__PCLMUL__" : 1
00:14:15.125 Fetching value of define "__RDRND__" : 1
00:14:15.125 Fetching value of define "__RDSEED__" : 1
00:14:15.125 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:14:15.125 Fetching value of define "__znver1__" : (undefined)
00:14:15.125 Fetching value of define "__znver2__" : (undefined)
00:14:15.125 Fetching value of define "__znver3__" : (undefined)
00:14:15.125 Fetching value of define "__znver4__" : (undefined)
00:14:15.125 Library asan found: YES
00:14:15.125 Compiler for C supports arguments -Wno-format-truncation: YES
00:14:15.125 Message: lib/log: Defining dependency "log"
00:14:15.125 Message: lib/kvargs: Defining dependency "kvargs"
00:14:15.125 Message: lib/telemetry: Defining dependency "telemetry"
00:14:15.125 Library rt found: YES
00:14:15.125 Checking for function "getentropy" : NO
00:14:15.125 Message: lib/eal: Defining dependency "eal"
00:14:15.125 Message: lib/ring: Defining dependency "ring"
00:14:15.125 Message: lib/rcu: Defining dependency "rcu"
00:14:15.125 Message: lib/mempool: Defining dependency "mempool"
00:14:15.125 Message: lib/mbuf: Defining dependency "mbuf"
00:14:15.125 Fetching value of define "__PCLMUL__" : 1 (cached)
00:14:15.125 Fetching value of define "__AVX512F__" : 1 (cached)
00:14:15.125 Fetching value of define "__AVX512BW__" : 1 (cached)
00:14:15.125 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:14:15.125 Fetching value of define "__AVX512VL__" : 1 (cached)
00:14:15.125 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:14:15.125 Compiler for C supports arguments -mpclmul: YES
00:14:15.125 Compiler for C supports arguments -maes: YES
00:14:15.125 Compiler for C supports arguments -mavx512f: YES (cached)
00:14:15.125 Compiler for C supports arguments -mavx512bw: YES
00:14:15.125 Compiler for C supports arguments -mavx512dq: YES
00:14:15.125 Compiler for C supports arguments -mavx512vl: YES
00:14:15.125 Compiler for C supports arguments -mvpclmulqdq: YES
00:14:15.125 Compiler for C supports arguments -mavx2: YES
00:14:15.125 Compiler for C supports arguments -mavx: YES
00:14:15.125 Message: lib/net: Defining dependency "net"
00:14:15.125 Message: lib/meter: Defining dependency "meter"
00:14:15.125 Message: lib/ethdev: Defining dependency "ethdev"
00:14:15.125 Message: lib/pci: Defining dependency "pci"
00:14:15.125 Message: lib/cmdline: Defining dependency "cmdline"
00:14:15.125 Message: lib/hash: Defining dependency "hash"
00:14:15.125 Message: lib/timer: Defining dependency "timer"
00:14:15.125 Message: lib/compressdev: Defining dependency "compressdev"
00:14:15.125 Message: lib/cryptodev: Defining dependency "cryptodev"
00:14:15.125 Message: lib/dmadev: Defining dependency "dmadev"
00:14:15.125 Compiler for C supports arguments -Wno-cast-qual: YES
00:14:15.125 Message: lib/power: Defining dependency "power"
00:14:15.125 Message: lib/reorder: Defining dependency "reorder"
00:14:15.125 Message: lib/security: Defining dependency "security"
00:14:15.125 Has header "linux/userfaultfd.h" : YES
00:14:15.125 Has header "linux/vduse.h" : YES
00:14:15.125 Message: lib/vhost: Defining dependency "vhost"
00:14:15.125 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:14:15.125 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:14:15.125 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:14:15.125 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:14:15.125 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:14:15.125 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:14:15.125 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:14:15.125 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:14:15.125 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:14:15.125 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:14:15.125 Program doxygen found: YES (/usr/local/bin/doxygen)
00:14:15.125 Configuring doxy-api-html.conf using configuration
00:14:15.125 Configuring doxy-api-man.conf using configuration
00:14:15.125 Program mandb found: YES (/usr/bin/mandb)
00:14:15.125 Program sphinx-build found: NO
00:14:15.125 Configuring rte_build_config.h using configuration
00:14:15.125 Message:
00:14:15.125 =================
00:14:15.125 Applications Enabled
00:14:15.125 =================
00:14:15.125
00:14:15.125 apps:
00:14:15.125
00:14:15.125
00:14:15.125 Message:
00:14:15.125 =================
00:14:15.125 Libraries Enabled
00:14:15.125 =================
00:14:15.125
00:14:15.125 libs:
00:14:15.125 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:14:15.125 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:14:15.125 cryptodev, dmadev, power, reorder, security, vhost,
00:14:15.125
00:14:15.125 Message:
00:14:15.125 ===============
00:14:15.125 Drivers Enabled
00:14:15.125 ===============
00:14:15.125
00:14:15.125 common:
00:14:15.125
00:14:15.125 bus:
00:14:15.125 pci, vdev,
00:14:15.125 mempool:
00:14:15.125 ring,
00:14:15.125 dma:
00:14:15.125
00:14:15.125 net:
00:14:15.125
00:14:15.125 crypto:
00:14:15.125
00:14:15.125 compress:
00:14:15.125
00:14:15.125 vdpa:
00:14:15.125
00:14:15.125
00:14:15.125 Message:
00:14:15.125 =================
00:14:15.125 Content Skipped
00:14:15.125 =================
00:14:15.125
00:14:15.125 apps:
00:14:15.125 dumpcap: explicitly disabled via build config
00:14:15.125 graph: explicitly disabled via build config
00:14:15.125 pdump: explicitly disabled via build config
00:14:15.125 proc-info: explicitly disabled via build config
00:14:15.125 test-acl: explicitly disabled via build config
00:14:15.125 test-bbdev: explicitly disabled via build config
00:14:15.125 test-cmdline: explicitly disabled via build config
00:14:15.125 test-compress-perf: explicitly disabled via build config
00:14:15.125 test-crypto-perf: explicitly disabled via build config
00:14:15.125 test-dma-perf: explicitly disabled via build config
00:14:15.125 test-eventdev: explicitly disabled via build config
00:14:15.125 test-fib: explicitly disabled via build config
00:14:15.125 test-flow-perf: explicitly disabled via build config
00:14:15.125 test-gpudev: explicitly disabled via build config
00:14:15.125 test-mldev: explicitly disabled via build config
00:14:15.125 test-pipeline: explicitly disabled via build config
00:14:15.125 test-pmd: explicitly disabled via build config
00:14:15.125 test-regex: explicitly disabled via build config
00:14:15.125 test-sad: explicitly disabled via build config
00:14:15.125 test-security-perf: explicitly disabled via build config
00:14:15.125
00:14:15.125 libs:
00:14:15.125 argparse: explicitly disabled via build config
00:14:15.125 metrics: explicitly disabled via build config
00:14:15.125 acl: explicitly disabled via build config
00:14:15.125 bbdev: explicitly disabled via build config 00:14:15.125 bitratestats: explicitly disabled via build config 00:14:15.125 bpf: explicitly disabled via build config 00:14:15.125 cfgfile: explicitly disabled via build config 00:14:15.125 distributor: explicitly disabled via build config 00:14:15.125 efd: explicitly disabled via build config 00:14:15.125 eventdev: explicitly disabled via build config 00:14:15.125 dispatcher: explicitly disabled via build config 00:14:15.125 gpudev: explicitly disabled via build config 00:14:15.125 gro: explicitly disabled via build config 00:14:15.125 gso: explicitly disabled via build config 00:14:15.125 ip_frag: explicitly disabled via build config 00:14:15.125 jobstats: explicitly disabled via build config 00:14:15.125 latencystats: explicitly disabled via build config 00:14:15.125 lpm: explicitly disabled via build config 00:14:15.125 member: explicitly disabled via build config 00:14:15.125 pcapng: explicitly disabled via build config 00:14:15.125 rawdev: explicitly disabled via build config 00:14:15.126 regexdev: explicitly disabled via build config 00:14:15.126 mldev: explicitly disabled via build config 00:14:15.126 rib: explicitly disabled via build config 00:14:15.126 sched: explicitly disabled via build config 00:14:15.126 stack: explicitly disabled via build config 00:14:15.126 ipsec: explicitly disabled via build config 00:14:15.126 pdcp: explicitly disabled via build config 00:14:15.126 fib: explicitly disabled via build config 00:14:15.126 port: explicitly disabled via build config 00:14:15.126 pdump: explicitly disabled via build config 00:14:15.126 table: explicitly disabled via build config 00:14:15.126 pipeline: explicitly disabled via build config 00:14:15.126 graph: explicitly disabled via build config 00:14:15.126 node: explicitly disabled via build config 00:14:15.126 00:14:15.126 drivers: 00:14:15.126 common/cpt: not in enabled drivers build config 00:14:15.126 common/dpaax: not in enabled drivers build config 00:14:15.126 common/iavf: not in enabled drivers build config 00:14:15.126 common/idpf: not in enabled drivers build config 00:14:15.126 common/ionic: not in enabled drivers build config 00:14:15.126 common/mvep: not in enabled drivers build config 00:14:15.126 common/octeontx: not in enabled drivers build config 00:14:15.126 bus/auxiliary: not in enabled drivers build config 00:14:15.126 bus/cdx: not in enabled drivers build config 00:14:15.126 bus/dpaa: not in enabled drivers build config 00:14:15.126 bus/fslmc: not in enabled drivers build config 00:14:15.126 bus/ifpga: not in enabled drivers build config 00:14:15.126 bus/platform: not in enabled drivers build config 00:14:15.126 bus/uacce: not in enabled drivers build config 00:14:15.126 bus/vmbus: not in enabled drivers build config 00:14:15.126 common/cnxk: not in enabled drivers build config 00:14:15.126 common/mlx5: not in enabled drivers build config 00:14:15.126 common/nfp: not in enabled drivers build config 00:14:15.126 common/nitrox: not in enabled drivers build config 00:14:15.126 common/qat: not in enabled drivers build config 00:14:15.126 common/sfc_efx: not in enabled drivers build config 00:14:15.126 mempool/bucket: not in enabled drivers build config 00:14:15.126 mempool/cnxk: not in enabled drivers build config 00:14:15.126 mempool/dpaa: not in enabled drivers build config 00:14:15.126 mempool/dpaa2: not in enabled drivers build config 00:14:15.126 mempool/octeontx: not in enabled drivers build config 00:14:15.126 mempool/stack: not in enabled 
drivers build config 00:14:15.126 dma/cnxk: not in enabled drivers build config 00:14:15.126 dma/dpaa: not in enabled drivers build config 00:14:15.126 dma/dpaa2: not in enabled drivers build config 00:14:15.126 dma/hisilicon: not in enabled drivers build config 00:14:15.126 dma/idxd: not in enabled drivers build config 00:14:15.126 dma/ioat: not in enabled drivers build config 00:14:15.126 dma/skeleton: not in enabled drivers build config 00:14:15.126 net/af_packet: not in enabled drivers build config 00:14:15.126 net/af_xdp: not in enabled drivers build config 00:14:15.126 net/ark: not in enabled drivers build config 00:14:15.126 net/atlantic: not in enabled drivers build config 00:14:15.126 net/avp: not in enabled drivers build config 00:14:15.126 net/axgbe: not in enabled drivers build config 00:14:15.126 net/bnx2x: not in enabled drivers build config 00:14:15.126 net/bnxt: not in enabled drivers build config 00:14:15.126 net/bonding: not in enabled drivers build config 00:14:15.126 net/cnxk: not in enabled drivers build config 00:14:15.126 net/cpfl: not in enabled drivers build config 00:14:15.126 net/cxgbe: not in enabled drivers build config 00:14:15.126 net/dpaa: not in enabled drivers build config 00:14:15.126 net/dpaa2: not in enabled drivers build config 00:14:15.126 net/e1000: not in enabled drivers build config 00:14:15.126 net/ena: not in enabled drivers build config 00:14:15.126 net/enetc: not in enabled drivers build config 00:14:15.126 net/enetfec: not in enabled drivers build config 00:14:15.126 net/enic: not in enabled drivers build config 00:14:15.126 net/failsafe: not in enabled drivers build config 00:14:15.126 net/fm10k: not in enabled drivers build config 00:14:15.126 net/gve: not in enabled drivers build config 00:14:15.126 net/hinic: not in enabled drivers build config 00:14:15.126 net/hns3: not in enabled drivers build config 00:14:15.126 net/i40e: not in enabled drivers build config 00:14:15.126 net/iavf: not in enabled drivers build config 00:14:15.126 net/ice: not in enabled drivers build config 00:14:15.126 net/idpf: not in enabled drivers build config 00:14:15.126 net/igc: not in enabled drivers build config 00:14:15.126 net/ionic: not in enabled drivers build config 00:14:15.126 net/ipn3ke: not in enabled drivers build config 00:14:15.126 net/ixgbe: not in enabled drivers build config 00:14:15.126 net/mana: not in enabled drivers build config 00:14:15.126 net/memif: not in enabled drivers build config 00:14:15.126 net/mlx4: not in enabled drivers build config 00:14:15.126 net/mlx5: not in enabled drivers build config 00:14:15.126 net/mvneta: not in enabled drivers build config 00:14:15.126 net/mvpp2: not in enabled drivers build config 00:14:15.126 net/netvsc: not in enabled drivers build config 00:14:15.126 net/nfb: not in enabled drivers build config 00:14:15.126 net/nfp: not in enabled drivers build config 00:14:15.126 net/ngbe: not in enabled drivers build config 00:14:15.126 net/null: not in enabled drivers build config 00:14:15.126 net/octeontx: not in enabled drivers build config 00:14:15.126 net/octeon_ep: not in enabled drivers build config 00:14:15.126 net/pcap: not in enabled drivers build config 00:14:15.126 net/pfe: not in enabled drivers build config 00:14:15.126 net/qede: not in enabled drivers build config 00:14:15.126 net/ring: not in enabled drivers build config 00:14:15.126 net/sfc: not in enabled drivers build config 00:14:15.126 net/softnic: not in enabled drivers build config 00:14:15.126 net/tap: not in enabled drivers build config 
00:14:15.126 net/thunderx: not in enabled drivers build config 00:14:15.126 net/txgbe: not in enabled drivers build config 00:14:15.126 net/vdev_netvsc: not in enabled drivers build config 00:14:15.126 net/vhost: not in enabled drivers build config 00:14:15.126 net/virtio: not in enabled drivers build config 00:14:15.126 net/vmxnet3: not in enabled drivers build config 00:14:15.126 raw/*: missing internal dependency, "rawdev" 00:14:15.126 crypto/armv8: not in enabled drivers build config 00:14:15.126 crypto/bcmfs: not in enabled drivers build config 00:14:15.126 crypto/caam_jr: not in enabled drivers build config 00:14:15.126 crypto/ccp: not in enabled drivers build config 00:14:15.126 crypto/cnxk: not in enabled drivers build config 00:14:15.126 crypto/dpaa_sec: not in enabled drivers build config 00:14:15.126 crypto/dpaa2_sec: not in enabled drivers build config 00:14:15.126 crypto/ipsec_mb: not in enabled drivers build config 00:14:15.126 crypto/mlx5: not in enabled drivers build config 00:14:15.126 crypto/mvsam: not in enabled drivers build config 00:14:15.126 crypto/nitrox: not in enabled drivers build config 00:14:15.126 crypto/null: not in enabled drivers build config 00:14:15.126 crypto/octeontx: not in enabled drivers build config 00:14:15.126 crypto/openssl: not in enabled drivers build config 00:14:15.126 crypto/scheduler: not in enabled drivers build config 00:14:15.126 crypto/uadk: not in enabled drivers build config 00:14:15.126 crypto/virtio: not in enabled drivers build config 00:14:15.126 compress/isal: not in enabled drivers build config 00:14:15.126 compress/mlx5: not in enabled drivers build config 00:14:15.126 compress/nitrox: not in enabled drivers build config 00:14:15.126 compress/octeontx: not in enabled drivers build config 00:14:15.126 compress/zlib: not in enabled drivers build config 00:14:15.126 regex/*: missing internal dependency, "regexdev" 00:14:15.126 ml/*: missing internal dependency, "mldev" 00:14:15.126 vdpa/ifc: not in enabled drivers build config 00:14:15.126 vdpa/mlx5: not in enabled drivers build config 00:14:15.126 vdpa/nfp: not in enabled drivers build config 00:14:15.126 vdpa/sfc: not in enabled drivers build config 00:14:15.126 event/*: missing internal dependency, "eventdev" 00:14:15.126 baseband/*: missing internal dependency, "bbdev" 00:14:15.126 gpu/*: missing internal dependency, "gpudev" 00:14:15.126 00:14:15.126 00:14:15.126 Build targets in project: 85 00:14:15.126 00:14:15.126 DPDK 24.03.0 00:14:15.126 00:14:15.126 User defined options 00:14:15.126 buildtype : debug 00:14:15.126 default_library : shared 00:14:15.126 libdir : lib 00:14:15.126 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:15.126 b_sanitize : address 00:14:15.126 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:14:15.126 c_link_args : 00:14:15.126 cpu_instruction_set: native 00:14:15.126 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:14:15.126 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:14:15.126 enable_docs : false 00:14:15.126 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:14:15.126 
enable_kmods : false 00:14:15.126 max_lcores : 128 00:14:15.126 tests : false 00:14:15.126 00:14:15.126 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:14:15.126 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:14:15.126 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:14:15.126 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:14:15.126 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:14:15.126 [4/268] Linking static target lib/librte_kvargs.a 00:14:15.126 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:14:15.126 [6/268] Linking static target lib/librte_log.a 00:14:15.386 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:14:15.386 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:14:15.644 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:14:15.644 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:14:15.644 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:14:15.644 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:14:15.644 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:14:15.644 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:14:15.903 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:14:15.903 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:14:15.903 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:14:15.903 [18/268] Linking static target lib/librte_telemetry.a 00:14:16.162 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:14:16.162 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:14:16.162 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:14:16.162 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:14:16.162 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:14:16.162 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:14:16.162 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:14:16.162 [26/268] Linking target lib/librte_log.so.24.1 00:14:16.421 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:14:16.421 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:14:16.421 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:14:16.680 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:14:16.680 [31/268] Linking target lib/librte_kvargs.so.24.1 00:14:16.680 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:14:16.680 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:14:16.680 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:14:16.680 [35/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:14:16.680 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:14:16.680 [37/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:14:16.680 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:14:16.680 [39/268] Linking target lib/librte_telemetry.so.24.1 00:14:16.939 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:14:16.939 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:14:16.939 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:14:16.939 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:14:16.939 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:14:17.198 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:14:17.198 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:14:17.198 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:14:17.457 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:14:17.457 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:14:17.457 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:14:17.457 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:14:17.457 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:14:17.457 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:14:17.716 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:14:17.716 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:14:17.716 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:14:17.716 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:14:17.974 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:14:17.974 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:14:17.974 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:14:17.974 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:14:17.974 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:14:17.974 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:14:17.974 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:14:18.233 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:14:18.233 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:14:18.492 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:14:18.492 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:14:18.492 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:14:18.492 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:14:18.492 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:14:18.764 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:14:18.764 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:14:18.764 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:14:18.764 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:14:18.764 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:14:18.764 [77/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:14:18.764 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:14:18.764 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:14:19.041 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:14:19.041 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:14:19.041 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:14:19.041 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:14:19.041 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:14:19.041 [85/268] Linking static target lib/librte_ring.a 00:14:19.041 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:14:19.303 [87/268] Linking static target lib/librte_eal.a 00:14:19.303 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:14:19.303 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:14:19.303 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:14:19.303 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:14:19.561 [92/268] Linking static target lib/librte_rcu.a 00:14:19.561 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:14:19.561 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:14:19.561 [95/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:14:19.820 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:14:19.820 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:14:19.820 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:14:19.820 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:14:19.820 [100/268] Linking static target lib/librte_mempool.a 00:14:19.820 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:14:20.079 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:14:20.079 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:14:20.079 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:14:20.079 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:14:20.079 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:14:20.079 [107/268] Linking static target lib/librte_net.a 00:14:20.079 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:14:20.079 [109/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:14:20.079 [110/268] Linking static target lib/librte_meter.a 00:14:20.079 [111/268] Linking static target lib/librte_mbuf.a 00:14:20.339 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:14:20.598 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:14:20.598 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:14:20.598 [115/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:14:20.598 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:14:20.598 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:14:20.857 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:14:20.858 [119/268] Generating lib/mempool.sym_chk 
with a custom command (wrapped by meson to capture output) 00:14:21.116 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:14:21.116 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:14:21.116 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:14:21.375 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:14:21.633 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:14:21.633 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:14:21.633 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:14:21.633 [127/268] Linking static target lib/librte_pci.a 00:14:21.633 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:14:21.633 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:14:21.892 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:14:21.892 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:14:21.892 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:14:21.892 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:14:21.892 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:14:21.892 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:14:21.892 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:14:21.892 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:14:21.892 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:14:21.892 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:14:21.892 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:14:22.150 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:14:22.150 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:14:22.150 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:14:22.150 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:14:22.150 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:14:22.150 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:14:22.150 [147/268] Linking static target lib/librte_cmdline.a 00:14:22.408 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:14:22.667 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:14:22.667 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:14:22.667 [151/268] Linking static target lib/librte_timer.a 00:14:22.667 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:14:22.667 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:14:22.926 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:14:22.926 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:14:22.926 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:14:23.184 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:14:23.184 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 
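The "Fetching value of define" probes near the top of this configure output are meson asking the compiler which instruction-set macros it predefines (GCC/Clang define __AVX512F__, __PCLMUL__ and friends when the matching -m flags are accepted), and the build above compiles both net_crc_sse.c.o and net_crc_avx512.c.o accordingly. A minimal sketch, not taken from the DPDK sources, of the compile-time half of that selection (DPDK additionally checks CPU flags at runtime; pick_crc_impl() is a hypothetical helper):

    /* Illustrative only: the macros are real GCC/Clang predefines tied to
     * the -mpclmul/-mavx512f/-mvpclmulqdq flags probed in the log above. */
    #include <stdio.h>

    static const char *pick_crc_impl(void)
    {
    #if defined(__AVX512F__) && defined(__VPCLMULQDQ__)
        return "avx512";      /* widest vectors + carry-less multiply */
    #elif defined(__PCLMUL__)
        return "pclmulqdq";   /* SSE-era carry-less multiply */
    #else
        return "scalar";      /* portable fallback */
    #endif
    }

    int main(void)
    {
        printf("selected CRC implementation: %s\n", pick_crc_impl());
        return 0;
    }

On this builder the probes report __AVX512F__ : 1, __PCLMUL__ : 1 and __VPCLMULQDQ__ : (undefined), so a sketch like this would select "pclmulqdq".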
00:14:23.184 [159/268] Linking static target lib/librte_hash.a 00:14:23.184 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:14:23.184 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:14:23.185 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:14:23.444 [163/268] Linking static target lib/librte_ethdev.a 00:14:23.444 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:14:23.444 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:14:23.444 [166/268] Linking static target lib/librte_dmadev.a 00:14:23.444 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:14:23.444 [168/268] Linking static target lib/librte_compressdev.a 00:14:23.444 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:14:23.703 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:14:23.703 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:14:23.703 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:14:23.703 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:14:23.962 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:14:23.962 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:14:24.220 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:14:24.220 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:14:24.220 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:14:24.480 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:14:24.480 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.480 [181/268] Linking static target lib/librte_cryptodev.a 00:14:24.480 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:14:24.480 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.480 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:14:24.480 [185/268] Linking static target lib/librte_power.a 00:14:24.480 [186/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:24.740 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:14:24.999 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:14:24.999 [189/268] Linking static target lib/librte_reorder.a 00:14:24.999 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:14:24.999 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:14:24.999 [192/268] Linking static target lib/librte_security.a 00:14:24.999 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:14:25.621 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.621 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:14:25.621 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.880 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:14:25.880 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
00:14:25.880 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:14:26.140 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:14:26.140 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:14:26.399 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:14:26.399 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:14:26.399 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:14:26.399 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:14:26.399 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:14:26.657 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:14:26.657 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:14:26.657 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:14:26.657 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:14:26.917 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:26.917 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:14:26.917 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:14:26.917 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:26.917 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:14:26.917 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:14:26.917 [217/268] Linking static target drivers/librte_bus_vdev.a 00:14:26.917 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:14:26.917 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:26.917 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:14:26.917 [221/268] Linking static target drivers/librte_bus_pci.a 00:14:27.176 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:14:27.176 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:27.176 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:14:27.176 [225/268] Linking static target drivers/librte_mempool_ring.a 00:14:27.176 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:27.437 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:14:28.374 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:14:32.568 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:14:32.568 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:14:32.568 [231/268] Linking target lib/librte_eal.so.24.1 00:14:32.568 [232/268] Linking static target lib/librte_vhost.a 00:14:32.568 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:14:32.568 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:14:32.568 [235/268] Linking target lib/librte_timer.so.24.1 00:14:32.568 [236/268] Linking target lib/librte_pci.so.24.1 00:14:32.568 [237/268] Linking target lib/librte_ring.so.24.1 
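The "User defined options" summary above (buildtype : debug, b_sanitize : address, c_args ending in -Werror) is why every object in this run is built with AddressSanitizer instrumentation: meson's b_sanitize=address appends -fsanitize=address to compile and link lines. A minimal sketch, with a hypothetical file name oob.c, of the class of bug that instrumentation turns into an immediate, diagnosable abort:

    /* Illustrative only. Built with -fsanitize=address (what
     * b_sanitize=address adds), the store below produces an ASan
     * heap-buffer-overflow report at runtime instead of silent corruption.
     * Try: gcc -fsanitize=address -g oob.c && ./a.out */
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(8);
        buf[8] = 'x';   /* one byte past the end of the allocation */
        free(buf);
        return 0;
    }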
00:14:32.568 [238/268] Linking target lib/librte_dmadev.so.24.1 00:14:32.568 [239/268] Linking target lib/librte_meter.so.24.1 00:14:32.568 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:14:32.568 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:14:32.568 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:14:32.568 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:14:32.568 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:14:32.568 [245/268] Linking target lib/librte_rcu.so.24.1 00:14:32.568 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:14:32.568 [247/268] Linking target lib/librte_mempool.so.24.1 00:14:32.568 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:14:32.568 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:14:32.568 [250/268] Linking target lib/librte_mbuf.so.24.1 00:14:32.568 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:14:32.568 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:14:32.568 [253/268] Linking target lib/librte_net.so.24.1 00:14:32.569 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:14:32.569 [255/268] Linking target lib/librte_reorder.so.24.1 00:14:32.827 [256/268] Linking target lib/librte_compressdev.so.24.1 00:14:32.827 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:14:32.827 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:14:32.827 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:14:32.827 [260/268] Linking target lib/librte_cmdline.so.24.1 00:14:32.827 [261/268] Linking target lib/librte_hash.so.24.1 00:14:32.827 [262/268] Linking target lib/librte_security.so.24.1 00:14:32.827 [263/268] Linking target lib/librte_ethdev.so.24.1 00:14:33.086 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:14:33.086 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:14:33.086 [266/268] Linking target lib/librte_power.so.24.1 00:14:34.023 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:14:34.023 [268/268] Linking target lib/librte_vhost.so.24.1 00:14:34.023 INFO: autodetecting backend as ninja 00:14:34.023 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:14:52.151 CC lib/ut_mock/mock.o 00:14:52.151 CC lib/log/log_flags.o 00:14:52.151 CC lib/log/log.o 00:14:52.151 CC lib/log/log_deprecated.o 00:14:52.151 CC lib/ut/ut.o 00:14:52.151 LIB libspdk_log.a 00:14:52.151 LIB libspdk_ut_mock.a 00:14:52.151 LIB libspdk_ut.a 00:14:52.151 SO libspdk_ut_mock.so.6.0 00:14:52.151 SO libspdk_log.so.7.1 00:14:52.151 SO libspdk_ut.so.2.0 00:14:52.151 SYMLINK libspdk_ut_mock.so 00:14:52.151 SYMLINK libspdk_log.so 00:14:52.151 SYMLINK libspdk_ut.so 00:14:52.415 CC lib/dma/dma.o 00:14:52.415 CC lib/ioat/ioat.o 00:14:52.415 CXX lib/trace_parser/trace.o 00:14:52.415 CC lib/util/base64.o 00:14:52.415 CC lib/util/bit_array.o 00:14:52.415 CC lib/util/crc32c.o 00:14:52.415 CC lib/util/cpuset.o 00:14:52.415 CC lib/util/crc32.o 00:14:52.415 CC lib/util/crc16.o 00:14:52.415 CC 
lib/vfio_user/host/vfio_user_pci.o 00:14:52.415 CC lib/vfio_user/host/vfio_user.o 00:14:52.415 CC lib/util/crc32_ieee.o 00:14:52.415 CC lib/util/crc64.o 00:14:52.415 LIB libspdk_dma.a 00:14:52.415 CC lib/util/dif.o 00:14:52.415 SO libspdk_dma.so.5.0 00:14:52.415 CC lib/util/fd.o 00:14:52.674 LIB libspdk_ioat.a 00:14:52.674 SYMLINK libspdk_dma.so 00:14:52.674 CC lib/util/fd_group.o 00:14:52.674 CC lib/util/file.o 00:14:52.674 CC lib/util/hexlify.o 00:14:52.674 SO libspdk_ioat.so.7.0 00:14:52.674 CC lib/util/iov.o 00:14:52.674 SYMLINK libspdk_ioat.so 00:14:52.674 CC lib/util/math.o 00:14:52.674 CC lib/util/net.o 00:14:52.674 LIB libspdk_vfio_user.a 00:14:52.674 CC lib/util/pipe.o 00:14:52.674 SO libspdk_vfio_user.so.5.0 00:14:52.674 CC lib/util/strerror_tls.o 00:14:52.674 CC lib/util/string.o 00:14:52.674 SYMLINK libspdk_vfio_user.so 00:14:52.674 CC lib/util/uuid.o 00:14:52.674 CC lib/util/xor.o 00:14:52.932 CC lib/util/zipf.o 00:14:52.932 CC lib/util/md5.o 00:14:53.190 LIB libspdk_util.a 00:14:53.190 SO libspdk_util.so.10.0 00:14:53.190 LIB libspdk_trace_parser.a 00:14:53.449 SO libspdk_trace_parser.so.6.0 00:14:53.449 SYMLINK libspdk_util.so 00:14:53.449 SYMLINK libspdk_trace_parser.so 00:14:53.449 CC lib/rdma_provider/common.o 00:14:53.449 CC lib/rdma_provider/rdma_provider_verbs.o 00:14:53.449 CC lib/idxd/idxd.o 00:14:53.449 CC lib/idxd/idxd_user.o 00:14:53.708 CC lib/idxd/idxd_kernel.o 00:14:53.708 CC lib/env_dpdk/env.o 00:14:53.708 CC lib/rdma_utils/rdma_utils.o 00:14:53.708 CC lib/vmd/vmd.o 00:14:53.708 CC lib/conf/conf.o 00:14:53.708 CC lib/json/json_parse.o 00:14:53.708 CC lib/vmd/led.o 00:14:53.708 LIB libspdk_rdma_provider.a 00:14:53.708 CC lib/env_dpdk/memory.o 00:14:53.708 SO libspdk_rdma_provider.so.6.0 00:14:53.708 LIB libspdk_conf.a 00:14:53.708 CC lib/json/json_util.o 00:14:53.967 SO libspdk_conf.so.6.0 00:14:53.967 CC lib/json/json_write.o 00:14:53.967 SYMLINK libspdk_rdma_provider.so 00:14:53.967 LIB libspdk_rdma_utils.a 00:14:53.967 CC lib/env_dpdk/pci.o 00:14:53.967 SO libspdk_rdma_utils.so.1.0 00:14:53.967 CC lib/env_dpdk/init.o 00:14:53.967 SYMLINK libspdk_conf.so 00:14:53.967 CC lib/env_dpdk/threads.o 00:14:53.967 SYMLINK libspdk_rdma_utils.so 00:14:53.967 CC lib/env_dpdk/pci_ioat.o 00:14:53.967 CC lib/env_dpdk/pci_virtio.o 00:14:53.967 CC lib/env_dpdk/pci_vmd.o 00:14:54.226 CC lib/env_dpdk/pci_idxd.o 00:14:54.226 LIB libspdk_json.a 00:14:54.226 SO libspdk_json.so.6.0 00:14:54.226 CC lib/env_dpdk/pci_event.o 00:14:54.226 CC lib/env_dpdk/sigbus_handler.o 00:14:54.226 CC lib/env_dpdk/pci_dpdk.o 00:14:54.226 LIB libspdk_idxd.a 00:14:54.226 SYMLINK libspdk_json.so 00:14:54.226 CC lib/env_dpdk/pci_dpdk_2207.o 00:14:54.226 CC lib/env_dpdk/pci_dpdk_2211.o 00:14:54.226 SO libspdk_idxd.so.12.1 00:14:54.226 LIB libspdk_vmd.a 00:14:54.226 SO libspdk_vmd.so.6.0 00:14:54.486 SYMLINK libspdk_idxd.so 00:14:54.486 SYMLINK libspdk_vmd.so 00:14:54.486 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:14:54.486 CC lib/jsonrpc/jsonrpc_client.o 00:14:54.486 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:14:54.486 CC lib/jsonrpc/jsonrpc_server.o 00:14:54.745 LIB libspdk_jsonrpc.a 00:14:54.745 SO libspdk_jsonrpc.so.6.0 00:14:54.745 SYMLINK libspdk_jsonrpc.so 00:14:55.313 LIB libspdk_env_dpdk.a 00:14:55.313 CC lib/rpc/rpc.o 00:14:55.313 SO libspdk_env_dpdk.so.15.0 00:14:55.571 LIB libspdk_rpc.a 00:14:55.571 SYMLINK libspdk_env_dpdk.so 00:14:55.571 SO libspdk_rpc.so.6.0 00:14:55.571 SYMLINK libspdk_rpc.so 00:14:56.140 CC lib/keyring/keyring.o 00:14:56.140 CC lib/keyring/keyring_rpc.o 00:14:56.140 CC 
lib/notify/notify.o 00:14:56.140 CC lib/notify/notify_rpc.o 00:14:56.140 CC lib/trace/trace.o 00:14:56.140 CC lib/trace/trace_flags.o 00:14:56.140 CC lib/trace/trace_rpc.o 00:14:56.140 LIB libspdk_notify.a 00:14:56.140 SO libspdk_notify.so.6.0 00:14:56.398 LIB libspdk_keyring.a 00:14:56.398 LIB libspdk_trace.a 00:14:56.398 SYMLINK libspdk_notify.so 00:14:56.398 SO libspdk_keyring.so.2.0 00:14:56.398 SO libspdk_trace.so.11.0 00:14:56.398 SYMLINK libspdk_keyring.so 00:14:56.398 SYMLINK libspdk_trace.so 00:14:57.010 CC lib/thread/iobuf.o 00:14:57.010 CC lib/thread/thread.o 00:14:57.010 CC lib/sock/sock.o 00:14:57.010 CC lib/sock/sock_rpc.o 00:14:57.270 LIB libspdk_sock.a 00:14:57.270 SO libspdk_sock.so.10.0 00:14:57.528 SYMLINK libspdk_sock.so 00:14:57.786 CC lib/nvme/nvme_ctrlr_cmd.o 00:14:57.786 CC lib/nvme/nvme_ctrlr.o 00:14:57.786 CC lib/nvme/nvme_fabric.o 00:14:57.786 CC lib/nvme/nvme_ns_cmd.o 00:14:57.786 CC lib/nvme/nvme_ns.o 00:14:57.786 CC lib/nvme/nvme_pcie_common.o 00:14:57.786 CC lib/nvme/nvme_pcie.o 00:14:57.786 CC lib/nvme/nvme_qpair.o 00:14:57.786 CC lib/nvme/nvme.o 00:14:58.722 CC lib/nvme/nvme_quirks.o 00:14:58.722 LIB libspdk_thread.a 00:14:58.722 CC lib/nvme/nvme_transport.o 00:14:58.722 SO libspdk_thread.so.11.0 00:14:58.722 CC lib/nvme/nvme_discovery.o 00:14:58.722 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:14:58.722 SYMLINK libspdk_thread.so 00:14:58.722 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:14:58.722 CC lib/nvme/nvme_tcp.o 00:14:58.722 CC lib/nvme/nvme_opal.o 00:14:58.980 CC lib/nvme/nvme_io_msg.o 00:14:58.981 CC lib/nvme/nvme_poll_group.o 00:14:59.239 CC lib/nvme/nvme_zns.o 00:14:59.239 CC lib/nvme/nvme_stubs.o 00:14:59.239 CC lib/nvme/nvme_auth.o 00:14:59.498 CC lib/nvme/nvme_cuse.o 00:14:59.498 CC lib/accel/accel.o 00:14:59.498 CC lib/nvme/nvme_rdma.o 00:14:59.756 CC lib/blob/blobstore.o 00:14:59.756 CC lib/init/json_config.o 00:15:00.014 CC lib/virtio/virtio.o 00:15:00.014 CC lib/fsdev/fsdev.o 00:15:00.014 CC lib/init/subsystem.o 00:15:00.273 CC lib/init/subsystem_rpc.o 00:15:00.273 CC lib/virtio/virtio_vhost_user.o 00:15:00.273 CC lib/virtio/virtio_vfio_user.o 00:15:00.531 CC lib/virtio/virtio_pci.o 00:15:00.531 CC lib/init/rpc.o 00:15:00.531 CC lib/accel/accel_rpc.o 00:15:00.531 CC lib/accel/accel_sw.o 00:15:00.531 LIB libspdk_init.a 00:15:00.531 CC lib/blob/request.o 00:15:00.791 SO libspdk_init.so.6.0 00:15:00.791 CC lib/blob/zeroes.o 00:15:00.791 CC lib/fsdev/fsdev_io.o 00:15:00.791 SYMLINK libspdk_init.so 00:15:00.791 CC lib/blob/blob_bs_dev.o 00:15:00.791 LIB libspdk_virtio.a 00:15:00.791 SO libspdk_virtio.so.7.0 00:15:01.052 CC lib/fsdev/fsdev_rpc.o 00:15:01.052 SYMLINK libspdk_virtio.so 00:15:01.052 CC lib/event/app.o 00:15:01.052 CC lib/event/reactor.o 00:15:01.052 CC lib/event/log_rpc.o 00:15:01.052 CC lib/event/app_rpc.o 00:15:01.052 LIB libspdk_accel.a 00:15:01.052 CC lib/event/scheduler_static.o 00:15:01.052 SO libspdk_accel.so.16.0 00:15:01.052 SYMLINK libspdk_accel.so 00:15:01.052 LIB libspdk_nvme.a 00:15:01.313 LIB libspdk_fsdev.a 00:15:01.313 SO libspdk_fsdev.so.1.0 00:15:01.313 SYMLINK libspdk_fsdev.so 00:15:01.313 SO libspdk_nvme.so.14.0 00:15:01.572 CC lib/bdev/bdev.o 00:15:01.572 CC lib/bdev/scsi_nvme.o 00:15:01.572 CC lib/bdev/bdev_rpc.o 00:15:01.572 CC lib/bdev/part.o 00:15:01.572 CC lib/bdev/bdev_zone.o 00:15:01.572 LIB libspdk_event.a 00:15:01.572 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:15:01.572 SO libspdk_event.so.14.0 00:15:01.572 SYMLINK libspdk_nvme.so 00:15:01.831 SYMLINK libspdk_event.so 00:15:02.399 LIB libspdk_fuse_dispatcher.a 
00:15:02.399 SO libspdk_fuse_dispatcher.so.1.0 00:15:02.399 SYMLINK libspdk_fuse_dispatcher.so 00:15:03.778 LIB libspdk_blob.a 00:15:03.778 SO libspdk_blob.so.11.0 00:15:03.778 SYMLINK libspdk_blob.so 00:15:04.346 CC lib/lvol/lvol.o 00:15:04.346 CC lib/blobfs/blobfs.o 00:15:04.346 CC lib/blobfs/tree.o 00:15:04.606 LIB libspdk_bdev.a 00:15:04.606 SO libspdk_bdev.so.17.0 00:15:04.606 SYMLINK libspdk_bdev.so 00:15:04.865 CC lib/nvmf/ctrlr.o 00:15:04.865 CC lib/nvmf/ctrlr_bdev.o 00:15:04.865 CC lib/nvmf/ctrlr_discovery.o 00:15:04.865 CC lib/nvmf/subsystem.o 00:15:04.865 CC lib/ftl/ftl_core.o 00:15:04.865 CC lib/ublk/ublk.o 00:15:05.124 CC lib/scsi/dev.o 00:15:05.124 CC lib/nbd/nbd.o 00:15:05.384 LIB libspdk_lvol.a 00:15:05.384 CC lib/scsi/lun.o 00:15:05.384 SO libspdk_lvol.so.10.0 00:15:05.384 LIB libspdk_blobfs.a 00:15:05.384 CC lib/ftl/ftl_init.o 00:15:05.384 SYMLINK libspdk_lvol.so 00:15:05.384 CC lib/ftl/ftl_layout.o 00:15:05.384 SO libspdk_blobfs.so.10.0 00:15:05.384 CC lib/nbd/nbd_rpc.o 00:15:05.384 SYMLINK libspdk_blobfs.so 00:15:05.384 CC lib/ftl/ftl_debug.o 00:15:05.642 CC lib/nvmf/nvmf.o 00:15:05.642 CC lib/nvmf/nvmf_rpc.o 00:15:05.642 CC lib/scsi/port.o 00:15:05.642 LIB libspdk_nbd.a 00:15:05.642 SO libspdk_nbd.so.7.0 00:15:05.642 CC lib/ublk/ublk_rpc.o 00:15:05.642 CC lib/nvmf/transport.o 00:15:05.900 SYMLINK libspdk_nbd.so 00:15:05.900 CC lib/scsi/scsi.o 00:15:05.900 CC lib/scsi/scsi_bdev.o 00:15:05.900 CC lib/ftl/ftl_io.o 00:15:05.900 LIB libspdk_ublk.a 00:15:05.900 CC lib/ftl/ftl_sb.o 00:15:05.900 SO libspdk_ublk.so.3.0 00:15:05.900 CC lib/nvmf/tcp.o 00:15:05.900 SYMLINK libspdk_ublk.so 00:15:06.158 CC lib/scsi/scsi_pr.o 00:15:06.158 CC lib/nvmf/stubs.o 00:15:06.158 CC lib/ftl/ftl_l2p.o 00:15:06.417 CC lib/scsi/scsi_rpc.o 00:15:06.417 CC lib/ftl/ftl_l2p_flat.o 00:15:06.417 CC lib/scsi/task.o 00:15:06.417 CC lib/nvmf/mdns_server.o 00:15:06.417 CC lib/nvmf/rdma.o 00:15:06.675 CC lib/ftl/ftl_nv_cache.o 00:15:06.675 CC lib/nvmf/auth.o 00:15:06.676 LIB libspdk_scsi.a 00:15:06.676 CC lib/ftl/ftl_band.o 00:15:06.676 CC lib/ftl/ftl_band_ops.o 00:15:06.676 SO libspdk_scsi.so.9.0 00:15:06.676 CC lib/ftl/ftl_writer.o 00:15:06.676 SYMLINK libspdk_scsi.so 00:15:06.676 CC lib/ftl/ftl_rq.o 00:15:06.934 CC lib/ftl/ftl_reloc.o 00:15:06.934 CC lib/ftl/ftl_l2p_cache.o 00:15:06.934 CC lib/ftl/ftl_p2l.o 00:15:07.192 CC lib/ftl/ftl_p2l_log.o 00:15:07.192 CC lib/ftl/mngt/ftl_mngt.o 00:15:07.192 CC lib/iscsi/conn.o 00:15:07.450 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:15:07.450 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:15:07.450 CC lib/ftl/mngt/ftl_mngt_startup.o 00:15:07.450 CC lib/ftl/mngt/ftl_mngt_md.o 00:15:07.450 CC lib/ftl/mngt/ftl_mngt_misc.o 00:15:07.708 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:15:07.708 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:15:07.708 CC lib/iscsi/init_grp.o 00:15:07.708 CC lib/ftl/mngt/ftl_mngt_band.o 00:15:07.708 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:15:07.708 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:15:07.965 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:15:07.965 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:15:07.965 CC lib/ftl/utils/ftl_conf.o 00:15:07.965 CC lib/iscsi/iscsi.o 00:15:07.965 CC lib/ftl/utils/ftl_md.o 00:15:07.965 CC lib/ftl/utils/ftl_mempool.o 00:15:07.965 CC lib/iscsi/param.o 00:15:07.965 CC lib/iscsi/portal_grp.o 00:15:08.224 CC lib/iscsi/tgt_node.o 00:15:08.224 CC lib/ftl/utils/ftl_bitmap.o 00:15:08.224 CC lib/vhost/vhost.o 00:15:08.224 CC lib/ftl/utils/ftl_property.o 00:15:08.224 CC lib/iscsi/iscsi_subsystem.o 00:15:08.224 CC lib/iscsi/iscsi_rpc.o 00:15:08.482 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:15:08.482 CC lib/vhost/vhost_rpc.o 00:15:08.482 CC lib/vhost/vhost_scsi.o 00:15:08.482 CC lib/vhost/vhost_blk.o 00:15:08.740 CC lib/iscsi/task.o 00:15:08.740 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:15:08.740 CC lib/vhost/rte_vhost_user.o 00:15:08.740 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:15:08.740 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:15:08.740 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:15:08.998 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:15:08.998 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:15:08.998 CC lib/ftl/upgrade/ftl_sb_v3.o 00:15:08.998 CC lib/ftl/upgrade/ftl_sb_v5.o 00:15:08.998 CC lib/ftl/nvc/ftl_nvc_dev.o 00:15:09.256 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:15:09.256 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:15:09.256 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:15:09.256 CC lib/ftl/base/ftl_base_dev.o 00:15:09.514 CC lib/ftl/base/ftl_base_bdev.o 00:15:09.514 LIB libspdk_nvmf.a 00:15:09.514 CC lib/ftl/ftl_trace.o 00:15:09.514 LIB libspdk_iscsi.a 00:15:09.514 SO libspdk_nvmf.so.20.0 00:15:09.772 SO libspdk_iscsi.so.8.0 00:15:09.772 LIB libspdk_ftl.a 00:15:09.772 SYMLINK libspdk_nvmf.so 00:15:09.772 SYMLINK libspdk_iscsi.so 00:15:10.031 LIB libspdk_vhost.a 00:15:10.031 SO libspdk_vhost.so.8.0 00:15:10.031 SO libspdk_ftl.so.9.0 00:15:10.289 SYMLINK libspdk_vhost.so 00:15:10.548 SYMLINK libspdk_ftl.so 00:15:10.807 CC module/env_dpdk/env_dpdk_rpc.o 00:15:10.807 CC module/keyring/file/keyring.o 00:15:10.807 CC module/scheduler/gscheduler/gscheduler.o 00:15:10.807 CC module/sock/posix/posix.o 00:15:10.807 CC module/scheduler/dynamic/scheduler_dynamic.o 00:15:10.807 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:15:10.807 CC module/keyring/linux/keyring.o 00:15:10.807 CC module/accel/error/accel_error.o 00:15:10.807 CC module/blob/bdev/blob_bdev.o 00:15:10.807 CC module/fsdev/aio/fsdev_aio.o 00:15:11.066 LIB libspdk_env_dpdk_rpc.a 00:15:11.066 SO libspdk_env_dpdk_rpc.so.6.0 00:15:11.066 CC module/keyring/file/keyring_rpc.o 00:15:11.066 LIB libspdk_scheduler_gscheduler.a 00:15:11.066 CC module/keyring/linux/keyring_rpc.o 00:15:11.066 SYMLINK libspdk_env_dpdk_rpc.so 00:15:11.066 CC module/fsdev/aio/fsdev_aio_rpc.o 00:15:11.066 LIB libspdk_scheduler_dpdk_governor.a 00:15:11.066 SO libspdk_scheduler_gscheduler.so.4.0 00:15:11.066 CC module/accel/error/accel_error_rpc.o 00:15:11.066 SO libspdk_scheduler_dpdk_governor.so.4.0 00:15:11.066 LIB libspdk_scheduler_dynamic.a 00:15:11.066 SO libspdk_scheduler_dynamic.so.4.0 00:15:11.066 SYMLINK libspdk_scheduler_gscheduler.so 00:15:11.066 SYMLINK libspdk_scheduler_dpdk_governor.so 00:15:11.066 LIB libspdk_keyring_file.a 00:15:11.066 LIB libspdk_keyring_linux.a 00:15:11.066 SYMLINK libspdk_scheduler_dynamic.so 00:15:11.066 CC module/fsdev/aio/linux_aio_mgr.o 00:15:11.325 LIB libspdk_blob_bdev.a 00:15:11.325 SO libspdk_keyring_file.so.2.0 00:15:11.325 SO libspdk_keyring_linux.so.1.0 00:15:11.325 SO libspdk_blob_bdev.so.11.0 00:15:11.325 LIB libspdk_accel_error.a 00:15:11.325 SO libspdk_accel_error.so.2.0 00:15:11.325 SYMLINK libspdk_keyring_file.so 00:15:11.325 SYMLINK libspdk_keyring_linux.so 00:15:11.325 SYMLINK libspdk_blob_bdev.so 00:15:11.325 CC module/accel/ioat/accel_ioat.o 00:15:11.325 CC module/accel/ioat/accel_ioat_rpc.o 00:15:11.325 CC module/accel/dsa/accel_dsa.o 00:15:11.325 SYMLINK libspdk_accel_error.so 00:15:11.325 CC module/accel/dsa/accel_dsa_rpc.o 00:15:11.325 CC module/accel/iaa/accel_iaa.o 00:15:11.325 CC module/accel/iaa/accel_iaa_rpc.o 00:15:11.583 LIB libspdk_accel_ioat.a 00:15:11.583 CC 
module/bdev/delay/vbdev_delay.o 00:15:11.583 SO libspdk_accel_ioat.so.6.0 00:15:11.583 CC module/blobfs/bdev/blobfs_bdev.o 00:15:11.583 LIB libspdk_accel_iaa.a 00:15:11.583 LIB libspdk_accel_dsa.a 00:15:11.583 SO libspdk_accel_iaa.so.3.0 00:15:11.583 SYMLINK libspdk_accel_ioat.so 00:15:11.583 CC module/bdev/error/vbdev_error.o 00:15:11.583 CC module/bdev/gpt/gpt.o 00:15:11.583 CC module/bdev/error/vbdev_error_rpc.o 00:15:11.583 SO libspdk_accel_dsa.so.5.0 00:15:11.841 LIB libspdk_fsdev_aio.a 00:15:11.841 CC module/bdev/lvol/vbdev_lvol.o 00:15:11.841 SYMLINK libspdk_accel_iaa.so 00:15:11.841 CC module/bdev/delay/vbdev_delay_rpc.o 00:15:11.841 LIB libspdk_sock_posix.a 00:15:11.841 SO libspdk_fsdev_aio.so.1.0 00:15:11.841 SYMLINK libspdk_accel_dsa.so 00:15:11.841 SO libspdk_sock_posix.so.6.0 00:15:11.841 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:15:11.841 SYMLINK libspdk_fsdev_aio.so 00:15:11.841 SYMLINK libspdk_sock_posix.so 00:15:11.841 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:15:11.841 CC module/bdev/gpt/vbdev_gpt.o 00:15:11.841 CC module/bdev/malloc/bdev_malloc.o 00:15:12.100 LIB libspdk_bdev_error.a 00:15:12.100 LIB libspdk_blobfs_bdev.a 00:15:12.100 LIB libspdk_bdev_delay.a 00:15:12.100 CC module/bdev/null/bdev_null.o 00:15:12.100 SO libspdk_bdev_error.so.6.0 00:15:12.100 SO libspdk_blobfs_bdev.so.6.0 00:15:12.100 CC module/bdev/nvme/bdev_nvme.o 00:15:12.100 SO libspdk_bdev_delay.so.6.0 00:15:12.100 CC module/bdev/passthru/vbdev_passthru.o 00:15:12.100 SYMLINK libspdk_bdev_error.so 00:15:12.100 SYMLINK libspdk_blobfs_bdev.so 00:15:12.100 CC module/bdev/nvme/bdev_nvme_rpc.o 00:15:12.100 CC module/bdev/nvme/nvme_rpc.o 00:15:12.100 SYMLINK libspdk_bdev_delay.so 00:15:12.100 CC module/bdev/nvme/bdev_mdns_client.o 00:15:12.100 LIB libspdk_bdev_gpt.a 00:15:12.100 SO libspdk_bdev_gpt.so.6.0 00:15:12.359 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:15:12.359 CC module/bdev/null/bdev_null_rpc.o 00:15:12.359 CC module/bdev/nvme/vbdev_opal.o 00:15:12.359 LIB libspdk_bdev_lvol.a 00:15:12.359 SYMLINK libspdk_bdev_gpt.so 00:15:12.359 CC module/bdev/nvme/vbdev_opal_rpc.o 00:15:12.359 SO libspdk_bdev_lvol.so.6.0 00:15:12.359 CC module/bdev/malloc/bdev_malloc_rpc.o 00:15:12.359 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:15:12.359 SYMLINK libspdk_bdev_lvol.so 00:15:12.359 LIB libspdk_bdev_passthru.a 00:15:12.359 LIB libspdk_bdev_null.a 00:15:12.359 SO libspdk_bdev_passthru.so.6.0 00:15:12.618 SO libspdk_bdev_null.so.6.0 00:15:12.618 CC module/bdev/raid/bdev_raid.o 00:15:12.618 CC module/bdev/raid/bdev_raid_rpc.o 00:15:12.618 LIB libspdk_bdev_malloc.a 00:15:12.618 SYMLINK libspdk_bdev_passthru.so 00:15:12.618 CC module/bdev/split/vbdev_split.o 00:15:12.618 CC module/bdev/split/vbdev_split_rpc.o 00:15:12.618 CC module/bdev/raid/bdev_raid_sb.o 00:15:12.618 SYMLINK libspdk_bdev_null.so 00:15:12.618 SO libspdk_bdev_malloc.so.6.0 00:15:12.618 SYMLINK libspdk_bdev_malloc.so 00:15:12.618 CC module/bdev/zone_block/vbdev_zone_block.o 00:15:12.618 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:15:12.877 CC module/bdev/raid/raid0.o 00:15:12.877 CC module/bdev/xnvme/bdev_xnvme.o 00:15:12.877 LIB libspdk_bdev_split.a 00:15:12.877 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:15:12.877 CC module/bdev/aio/bdev_aio.o 00:15:12.877 SO libspdk_bdev_split.so.6.0 00:15:12.877 SYMLINK libspdk_bdev_split.so 00:15:12.877 CC module/bdev/raid/raid1.o 00:15:13.135 CC module/bdev/raid/concat.o 00:15:13.135 CC module/bdev/ftl/bdev_ftl.o 00:15:13.135 LIB libspdk_bdev_xnvme.a 00:15:13.135 LIB libspdk_bdev_zone_block.a 00:15:13.135 
CC module/bdev/iscsi/bdev_iscsi.o 00:15:13.135 SO libspdk_bdev_xnvme.so.3.0 00:15:13.135 CC module/bdev/virtio/bdev_virtio_scsi.o 00:15:13.135 SO libspdk_bdev_zone_block.so.6.0 00:15:13.135 CC module/bdev/aio/bdev_aio_rpc.o 00:15:13.135 SYMLINK libspdk_bdev_xnvme.so 00:15:13.135 SYMLINK libspdk_bdev_zone_block.so 00:15:13.135 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:15:13.135 CC module/bdev/virtio/bdev_virtio_blk.o 00:15:13.394 CC module/bdev/virtio/bdev_virtio_rpc.o 00:15:13.394 CC module/bdev/ftl/bdev_ftl_rpc.o 00:15:13.394 LIB libspdk_bdev_aio.a 00:15:13.394 SO libspdk_bdev_aio.so.6.0 00:15:13.395 SYMLINK libspdk_bdev_aio.so 00:15:13.653 LIB libspdk_bdev_ftl.a 00:15:13.653 LIB libspdk_bdev_iscsi.a 00:15:13.653 SO libspdk_bdev_ftl.so.6.0 00:15:13.653 SO libspdk_bdev_iscsi.so.6.0 00:15:13.653 LIB libspdk_bdev_raid.a 00:15:13.653 SYMLINK libspdk_bdev_iscsi.so 00:15:13.653 SYMLINK libspdk_bdev_ftl.so 00:15:13.653 LIB libspdk_bdev_virtio.a 00:15:13.653 SO libspdk_bdev_raid.so.6.0 00:15:13.653 SO libspdk_bdev_virtio.so.6.0 00:15:13.913 SYMLINK libspdk_bdev_raid.so 00:15:13.913 SYMLINK libspdk_bdev_virtio.so 00:15:14.516 LIB libspdk_bdev_nvme.a 00:15:14.775 SO libspdk_bdev_nvme.so.7.0 00:15:14.775 SYMLINK libspdk_bdev_nvme.so 00:15:15.343 CC module/event/subsystems/keyring/keyring.o 00:15:15.343 CC module/event/subsystems/fsdev/fsdev.o 00:15:15.343 CC module/event/subsystems/vmd/vmd.o 00:15:15.343 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:15:15.343 CC module/event/subsystems/vmd/vmd_rpc.o 00:15:15.343 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:15:15.343 CC module/event/subsystems/iobuf/iobuf.o 00:15:15.602 CC module/event/subsystems/sock/sock.o 00:15:15.602 CC module/event/subsystems/scheduler/scheduler.o 00:15:15.602 LIB libspdk_event_keyring.a 00:15:15.602 LIB libspdk_event_vhost_blk.a 00:15:15.602 LIB libspdk_event_vmd.a 00:15:15.602 LIB libspdk_event_scheduler.a 00:15:15.602 SO libspdk_event_keyring.so.1.0 00:15:15.602 SO libspdk_event_vhost_blk.so.3.0 00:15:15.602 LIB libspdk_event_fsdev.a 00:15:15.602 SO libspdk_event_vmd.so.6.0 00:15:15.602 LIB libspdk_event_iobuf.a 00:15:15.602 SO libspdk_event_scheduler.so.4.0 00:15:15.602 SO libspdk_event_fsdev.so.1.0 00:15:15.602 LIB libspdk_event_sock.a 00:15:15.602 SO libspdk_event_iobuf.so.3.0 00:15:15.602 SYMLINK libspdk_event_keyring.so 00:15:15.602 SYMLINK libspdk_event_scheduler.so 00:15:15.602 SYMLINK libspdk_event_vmd.so 00:15:15.602 SO libspdk_event_sock.so.5.0 00:15:15.602 SYMLINK libspdk_event_vhost_blk.so 00:15:15.862 SYMLINK libspdk_event_iobuf.so 00:15:15.862 SYMLINK libspdk_event_fsdev.so 00:15:15.862 SYMLINK libspdk_event_sock.so 00:15:16.121 CC module/event/subsystems/accel/accel.o 00:15:16.380 LIB libspdk_event_accel.a 00:15:16.380 SO libspdk_event_accel.so.6.0 00:15:16.380 SYMLINK libspdk_event_accel.so 00:15:16.946 CC module/event/subsystems/bdev/bdev.o 00:15:16.946 LIB libspdk_event_bdev.a 00:15:16.946 SO libspdk_event_bdev.so.6.0 00:15:17.205 SYMLINK libspdk_event_bdev.so 00:15:17.463 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:15:17.463 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:15:17.463 CC module/event/subsystems/scsi/scsi.o 00:15:17.463 CC module/event/subsystems/ublk/ublk.o 00:15:17.463 CC module/event/subsystems/nbd/nbd.o 00:15:17.723 LIB libspdk_event_scsi.a 00:15:17.723 LIB libspdk_event_ublk.a 00:15:17.723 LIB libspdk_event_nbd.a 00:15:17.723 SO libspdk_event_scsi.so.6.0 00:15:17.723 SO libspdk_event_ublk.so.3.0 00:15:17.723 LIB libspdk_event_nvmf.a 00:15:17.723 SO libspdk_event_nbd.so.6.0 
00:15:17.723 SYMLINK libspdk_event_scsi.so 00:15:17.723 SO libspdk_event_nvmf.so.6.0 00:15:17.723 SYMLINK libspdk_event_ublk.so 00:15:17.723 SYMLINK libspdk_event_nbd.so 00:15:17.723 SYMLINK libspdk_event_nvmf.so 00:15:17.982 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:15:17.982 CC module/event/subsystems/iscsi/iscsi.o 00:15:18.241 LIB libspdk_event_vhost_scsi.a 00:15:18.241 LIB libspdk_event_iscsi.a 00:15:18.241 SO libspdk_event_vhost_scsi.so.3.0 00:15:18.241 SO libspdk_event_iscsi.so.6.0 00:15:18.241 SYMLINK libspdk_event_vhost_scsi.so 00:15:18.499 SYMLINK libspdk_event_iscsi.so 00:15:18.499 SO libspdk.so.6.0 00:15:18.499 SYMLINK libspdk.so 00:15:19.067 CC app/trace_record/trace_record.o 00:15:19.067 CXX app/trace/trace.o 00:15:19.067 CC app/spdk_nvme_perf/perf.o 00:15:19.067 CC app/nvmf_tgt/nvmf_main.o 00:15:19.067 CC app/spdk_lspci/spdk_lspci.o 00:15:19.067 CC app/iscsi_tgt/iscsi_tgt.o 00:15:19.067 CC app/spdk_tgt/spdk_tgt.o 00:15:19.067 CC examples/ioat/perf/perf.o 00:15:19.067 CC test/thread/poller_perf/poller_perf.o 00:15:19.067 CC examples/util/zipf/zipf.o 00:15:19.067 LINK spdk_lspci 00:15:19.067 LINK nvmf_tgt 00:15:19.067 LINK poller_perf 00:15:19.067 LINK zipf 00:15:19.067 LINK spdk_trace_record 00:15:19.327 LINK spdk_tgt 00:15:19.327 LINK iscsi_tgt 00:15:19.327 LINK ioat_perf 00:15:19.327 CC app/spdk_nvme_identify/identify.o 00:15:19.327 LINK spdk_trace 00:15:19.327 TEST_HEADER include/spdk/accel.h 00:15:19.327 TEST_HEADER include/spdk/accel_module.h 00:15:19.327 TEST_HEADER include/spdk/assert.h 00:15:19.327 TEST_HEADER include/spdk/barrier.h 00:15:19.327 TEST_HEADER include/spdk/base64.h 00:15:19.327 TEST_HEADER include/spdk/bdev.h 00:15:19.327 TEST_HEADER include/spdk/bdev_module.h 00:15:19.327 TEST_HEADER include/spdk/bdev_zone.h 00:15:19.327 TEST_HEADER include/spdk/bit_array.h 00:15:19.587 TEST_HEADER include/spdk/bit_pool.h 00:15:19.587 TEST_HEADER include/spdk/blob_bdev.h 00:15:19.587 TEST_HEADER include/spdk/blobfs_bdev.h 00:15:19.587 TEST_HEADER include/spdk/blobfs.h 00:15:19.587 TEST_HEADER include/spdk/blob.h 00:15:19.587 TEST_HEADER include/spdk/conf.h 00:15:19.587 TEST_HEADER include/spdk/config.h 00:15:19.587 TEST_HEADER include/spdk/cpuset.h 00:15:19.587 TEST_HEADER include/spdk/crc16.h 00:15:19.587 TEST_HEADER include/spdk/crc32.h 00:15:19.587 TEST_HEADER include/spdk/crc64.h 00:15:19.587 TEST_HEADER include/spdk/dif.h 00:15:19.587 TEST_HEADER include/spdk/dma.h 00:15:19.587 TEST_HEADER include/spdk/endian.h 00:15:19.587 TEST_HEADER include/spdk/env_dpdk.h 00:15:19.587 TEST_HEADER include/spdk/env.h 00:15:19.587 TEST_HEADER include/spdk/event.h 00:15:19.587 TEST_HEADER include/spdk/fd_group.h 00:15:19.587 TEST_HEADER include/spdk/fd.h 00:15:19.587 TEST_HEADER include/spdk/file.h 00:15:19.587 TEST_HEADER include/spdk/fsdev.h 00:15:19.587 TEST_HEADER include/spdk/fsdev_module.h 00:15:19.587 TEST_HEADER include/spdk/ftl.h 00:15:19.587 TEST_HEADER include/spdk/fuse_dispatcher.h 00:15:19.587 CC examples/ioat/verify/verify.o 00:15:19.587 CC app/spdk_nvme_discover/discovery_aer.o 00:15:19.587 TEST_HEADER include/spdk/gpt_spec.h 00:15:19.587 TEST_HEADER include/spdk/hexlify.h 00:15:19.587 TEST_HEADER include/spdk/histogram_data.h 00:15:19.587 TEST_HEADER include/spdk/idxd.h 00:15:19.587 TEST_HEADER include/spdk/idxd_spec.h 00:15:19.587 TEST_HEADER include/spdk/init.h 00:15:19.587 TEST_HEADER include/spdk/ioat.h 00:15:19.587 TEST_HEADER include/spdk/ioat_spec.h 00:15:19.587 TEST_HEADER include/spdk/iscsi_spec.h 00:15:19.587 TEST_HEADER include/spdk/json.h 
00:15:19.587 TEST_HEADER include/spdk/jsonrpc.h 00:15:19.587 TEST_HEADER include/spdk/keyring.h 00:15:19.587 TEST_HEADER include/spdk/keyring_module.h 00:15:19.587 TEST_HEADER include/spdk/likely.h 00:15:19.587 TEST_HEADER include/spdk/log.h 00:15:19.587 TEST_HEADER include/spdk/lvol.h 00:15:19.587 TEST_HEADER include/spdk/md5.h 00:15:19.587 TEST_HEADER include/spdk/memory.h 00:15:19.587 TEST_HEADER include/spdk/mmio.h 00:15:19.587 TEST_HEADER include/spdk/nbd.h 00:15:19.587 TEST_HEADER include/spdk/net.h 00:15:19.587 TEST_HEADER include/spdk/notify.h 00:15:19.587 CC test/app/bdev_svc/bdev_svc.o 00:15:19.587 TEST_HEADER include/spdk/nvme.h 00:15:19.587 TEST_HEADER include/spdk/nvme_intel.h 00:15:19.587 CC test/dma/test_dma/test_dma.o 00:15:19.587 TEST_HEADER include/spdk/nvme_ocssd.h 00:15:19.587 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:15:19.587 TEST_HEADER include/spdk/nvme_spec.h 00:15:19.587 TEST_HEADER include/spdk/nvme_zns.h 00:15:19.587 TEST_HEADER include/spdk/nvmf_cmd.h 00:15:19.587 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:15:19.587 TEST_HEADER include/spdk/nvmf.h 00:15:19.587 TEST_HEADER include/spdk/nvmf_spec.h 00:15:19.587 TEST_HEADER include/spdk/nvmf_transport.h 00:15:19.587 TEST_HEADER include/spdk/opal.h 00:15:19.587 TEST_HEADER include/spdk/opal_spec.h 00:15:19.587 TEST_HEADER include/spdk/pci_ids.h 00:15:19.587 TEST_HEADER include/spdk/pipe.h 00:15:19.587 TEST_HEADER include/spdk/queue.h 00:15:19.587 TEST_HEADER include/spdk/reduce.h 00:15:19.587 TEST_HEADER include/spdk/rpc.h 00:15:19.587 TEST_HEADER include/spdk/scheduler.h 00:15:19.587 CC test/event/event_perf/event_perf.o 00:15:19.587 TEST_HEADER include/spdk/scsi.h 00:15:19.587 TEST_HEADER include/spdk/scsi_spec.h 00:15:19.587 TEST_HEADER include/spdk/sock.h 00:15:19.587 TEST_HEADER include/spdk/stdinc.h 00:15:19.587 TEST_HEADER include/spdk/string.h 00:15:19.587 TEST_HEADER include/spdk/thread.h 00:15:19.587 TEST_HEADER include/spdk/trace.h 00:15:19.587 TEST_HEADER include/spdk/trace_parser.h 00:15:19.587 TEST_HEADER include/spdk/tree.h 00:15:19.587 TEST_HEADER include/spdk/ublk.h 00:15:19.587 TEST_HEADER include/spdk/util.h 00:15:19.587 TEST_HEADER include/spdk/uuid.h 00:15:19.587 TEST_HEADER include/spdk/version.h 00:15:19.587 TEST_HEADER include/spdk/vfio_user_pci.h 00:15:19.587 TEST_HEADER include/spdk/vfio_user_spec.h 00:15:19.587 CC test/env/mem_callbacks/mem_callbacks.o 00:15:19.587 TEST_HEADER include/spdk/vhost.h 00:15:19.587 TEST_HEADER include/spdk/vmd.h 00:15:19.587 TEST_HEADER include/spdk/xor.h 00:15:19.587 TEST_HEADER include/spdk/zipf.h 00:15:19.587 CXX test/cpp_headers/accel.o 00:15:19.587 CC app/spdk_top/spdk_top.o 00:15:19.846 LINK spdk_nvme_discover 00:15:19.846 LINK bdev_svc 00:15:19.846 LINK verify 00:15:19.846 LINK event_perf 00:15:19.846 CXX test/cpp_headers/accel_module.o 00:15:19.846 CXX test/cpp_headers/assert.o 00:15:19.846 LINK spdk_nvme_perf 00:15:20.106 CC examples/interrupt_tgt/interrupt_tgt.o 00:15:20.106 CC test/event/reactor/reactor.o 00:15:20.106 CXX test/cpp_headers/barrier.o 00:15:20.106 LINK test_dma 00:15:20.106 CC app/vhost/vhost.o 00:15:20.106 LINK reactor 00:15:20.106 LINK mem_callbacks 00:15:20.106 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:15:20.365 LINK interrupt_tgt 00:15:20.365 CC app/spdk_dd/spdk_dd.o 00:15:20.365 CXX test/cpp_headers/base64.o 00:15:20.365 LINK vhost 00:15:20.365 LINK spdk_nvme_identify 00:15:20.365 CC test/env/vtophys/vtophys.o 00:15:20.365 CC test/event/reactor_perf/reactor_perf.o 00:15:20.365 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 
00:15:20.365 CXX test/cpp_headers/bdev.o 00:15:20.625 LINK reactor_perf 00:15:20.625 LINK vtophys 00:15:20.625 CC examples/thread/thread/thread_ex.o 00:15:20.625 CXX test/cpp_headers/bdev_module.o 00:15:20.625 LINK nvme_fuzz 00:15:20.625 CC test/event/app_repeat/app_repeat.o 00:15:20.625 LINK spdk_dd 00:15:20.625 LINK spdk_top 00:15:20.625 CC examples/sock/hello_world/hello_sock.o 00:15:20.883 CC test/rpc_client/rpc_client_test.o 00:15:20.883 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:15:20.883 LINK app_repeat 00:15:20.883 CXX test/cpp_headers/bdev_zone.o 00:15:20.883 LINK thread 00:15:20.883 LINK env_dpdk_post_init 00:15:20.883 LINK hello_sock 00:15:20.883 LINK rpc_client_test 00:15:21.142 CC test/accel/dif/dif.o 00:15:21.142 CC test/blobfs/mkfs/mkfs.o 00:15:21.142 CXX test/cpp_headers/bit_array.o 00:15:21.142 CC app/fio/nvme/fio_plugin.o 00:15:21.142 CC test/event/scheduler/scheduler.o 00:15:21.142 CXX test/cpp_headers/bit_pool.o 00:15:21.142 CC test/env/memory/memory_ut.o 00:15:21.142 LINK mkfs 00:15:21.142 CC app/fio/bdev/fio_plugin.o 00:15:21.400 CC examples/vmd/lsvmd/lsvmd.o 00:15:21.400 CC test/lvol/esnap/esnap.o 00:15:21.400 LINK scheduler 00:15:21.400 CXX test/cpp_headers/blob_bdev.o 00:15:21.400 LINK lsvmd 00:15:21.708 CXX test/cpp_headers/blobfs_bdev.o 00:15:21.708 CC test/nvme/aer/aer.o 00:15:21.708 LINK spdk_nvme 00:15:21.708 CC examples/idxd/perf/perf.o 00:15:21.708 CC examples/vmd/led/led.o 00:15:21.708 CXX test/cpp_headers/blobfs.o 00:15:21.708 LINK spdk_bdev 00:15:21.708 LINK dif 00:15:21.978 LINK led 00:15:21.978 LINK aer 00:15:21.978 CXX test/cpp_headers/blob.o 00:15:21.978 CC examples/fsdev/hello_world/hello_fsdev.o 00:15:21.978 CXX test/cpp_headers/conf.o 00:15:21.978 LINK idxd_perf 00:15:21.978 CC examples/accel/perf/accel_perf.o 00:15:22.237 CC test/nvme/reset/reset.o 00:15:22.237 CC examples/blob/hello_world/hello_blob.o 00:15:22.237 LINK hello_fsdev 00:15:22.237 CXX test/cpp_headers/config.o 00:15:22.237 CXX test/cpp_headers/cpuset.o 00:15:22.237 CC test/bdev/bdevio/bdevio.o 00:15:22.237 LINK iscsi_fuzz 00:15:22.237 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:15:22.497 CXX test/cpp_headers/crc16.o 00:15:22.497 LINK memory_ut 00:15:22.497 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:15:22.497 LINK reset 00:15:22.497 LINK hello_blob 00:15:22.497 CXX test/cpp_headers/crc32.o 00:15:22.497 CC test/env/pci/pci_ut.o 00:15:22.497 CC test/nvme/sgl/sgl.o 00:15:22.497 LINK bdevio 00:15:22.756 LINK accel_perf 00:15:22.756 CC test/nvme/e2edp/nvme_dp.o 00:15:22.756 CXX test/cpp_headers/crc64.o 00:15:22.756 CC test/nvme/overhead/overhead.o 00:15:22.756 CC examples/blob/cli/blobcli.o 00:15:22.756 CXX test/cpp_headers/dif.o 00:15:22.756 LINK vhost_fuzz 00:15:23.015 LINK sgl 00:15:23.015 CC test/nvme/err_injection/err_injection.o 00:15:23.015 LINK nvme_dp 00:15:23.015 CC examples/nvme/hello_world/hello_world.o 00:15:23.015 LINK pci_ut 00:15:23.015 CXX test/cpp_headers/dma.o 00:15:23.015 LINK overhead 00:15:23.015 CXX test/cpp_headers/endian.o 00:15:23.015 CC test/app/histogram_perf/histogram_perf.o 00:15:23.015 LINK err_injection 00:15:23.015 CXX test/cpp_headers/env_dpdk.o 00:15:23.274 LINK hello_world 00:15:23.274 CXX test/cpp_headers/env.o 00:15:23.274 LINK histogram_perf 00:15:23.274 CC examples/nvme/reconnect/reconnect.o 00:15:23.274 CC test/app/jsoncat/jsoncat.o 00:15:23.274 LINK blobcli 00:15:23.274 CC test/app/stub/stub.o 00:15:23.274 CC test/nvme/startup/startup.o 00:15:23.274 CC examples/nvme/nvme_manage/nvme_manage.o 00:15:23.533 CXX 
test/cpp_headers/event.o 00:15:23.533 CC examples/nvme/arbitration/arbitration.o 00:15:23.533 LINK jsoncat 00:15:23.533 CC examples/nvme/hotplug/hotplug.o 00:15:23.533 LINK stub 00:15:23.533 LINK startup 00:15:23.533 CXX test/cpp_headers/fd_group.o 00:15:23.533 LINK reconnect 00:15:23.792 CC examples/nvme/cmb_copy/cmb_copy.o 00:15:23.792 LINK hotplug 00:15:23.792 CC examples/bdev/hello_world/hello_bdev.o 00:15:23.792 CXX test/cpp_headers/fd.o 00:15:23.792 CC test/nvme/reserve/reserve.o 00:15:23.792 LINK arbitration 00:15:23.792 CC examples/bdev/bdevperf/bdevperf.o 00:15:23.792 LINK cmb_copy 00:15:23.792 CC examples/nvme/abort/abort.o 00:15:24.051 CXX test/cpp_headers/file.o 00:15:24.051 LINK nvme_manage 00:15:24.051 LINK hello_bdev 00:15:24.051 CXX test/cpp_headers/fsdev.o 00:15:24.051 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:15:24.051 LINK reserve 00:15:24.051 CXX test/cpp_headers/fsdev_module.o 00:15:24.051 CC test/nvme/simple_copy/simple_copy.o 00:15:24.310 LINK pmr_persistence 00:15:24.310 CXX test/cpp_headers/ftl.o 00:15:24.310 CC test/nvme/boot_partition/boot_partition.o 00:15:24.310 CC test/nvme/connect_stress/connect_stress.o 00:15:24.310 CC test/nvme/compliance/nvme_compliance.o 00:15:24.310 CXX test/cpp_headers/fuse_dispatcher.o 00:15:24.310 LINK abort 00:15:24.310 LINK simple_copy 00:15:24.569 LINK boot_partition 00:15:24.569 CC test/nvme/fused_ordering/fused_ordering.o 00:15:24.569 LINK connect_stress 00:15:24.569 CXX test/cpp_headers/gpt_spec.o 00:15:24.569 CC test/nvme/doorbell_aers/doorbell_aers.o 00:15:24.569 CXX test/cpp_headers/hexlify.o 00:15:24.569 CXX test/cpp_headers/histogram_data.o 00:15:24.569 LINK fused_ordering 00:15:24.569 LINK nvme_compliance 00:15:24.569 CXX test/cpp_headers/idxd.o 00:15:24.569 CXX test/cpp_headers/idxd_spec.o 00:15:24.828 CC test/nvme/cuse/cuse.o 00:15:24.828 CC test/nvme/fdp/fdp.o 00:15:24.828 LINK doorbell_aers 00:15:24.828 LINK bdevperf 00:15:24.828 CXX test/cpp_headers/init.o 00:15:24.828 CXX test/cpp_headers/ioat.o 00:15:24.828 CXX test/cpp_headers/ioat_spec.o 00:15:24.828 CXX test/cpp_headers/iscsi_spec.o 00:15:24.828 CXX test/cpp_headers/json.o 00:15:24.828 CXX test/cpp_headers/jsonrpc.o 00:15:25.104 CXX test/cpp_headers/keyring.o 00:15:25.104 CXX test/cpp_headers/keyring_module.o 00:15:25.104 CXX test/cpp_headers/likely.o 00:15:25.104 CXX test/cpp_headers/log.o 00:15:25.104 CXX test/cpp_headers/lvol.o 00:15:25.104 CXX test/cpp_headers/md5.o 00:15:25.104 LINK fdp 00:15:25.104 CXX test/cpp_headers/memory.o 00:15:25.104 CXX test/cpp_headers/mmio.o 00:15:25.104 CXX test/cpp_headers/nbd.o 00:15:25.104 CXX test/cpp_headers/net.o 00:15:25.104 CXX test/cpp_headers/notify.o 00:15:25.362 CC examples/nvmf/nvmf/nvmf.o 00:15:25.362 CXX test/cpp_headers/nvme.o 00:15:25.362 CXX test/cpp_headers/nvme_intel.o 00:15:25.362 CXX test/cpp_headers/nvme_ocssd.o 00:15:25.362 CXX test/cpp_headers/nvme_ocssd_spec.o 00:15:25.362 CXX test/cpp_headers/nvme_spec.o 00:15:25.362 CXX test/cpp_headers/nvme_zns.o 00:15:25.362 CXX test/cpp_headers/nvmf_cmd.o 00:15:25.362 CXX test/cpp_headers/nvmf_fc_spec.o 00:15:25.620 CXX test/cpp_headers/nvmf.o 00:15:25.620 CXX test/cpp_headers/nvmf_spec.o 00:15:25.620 CXX test/cpp_headers/nvmf_transport.o 00:15:25.620 CXX test/cpp_headers/opal.o 00:15:25.620 LINK nvmf 00:15:25.620 CXX test/cpp_headers/opal_spec.o 00:15:25.620 CXX test/cpp_headers/pci_ids.o 00:15:25.620 CXX test/cpp_headers/pipe.o 00:15:25.878 CXX test/cpp_headers/queue.o 00:15:25.878 CXX test/cpp_headers/reduce.o 00:15:25.878 CXX test/cpp_headers/rpc.o 
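
The CXX test/cpp_headers/*.o entries running above (and continuing below through zipf.o) are SPDK's header-hygiene pass: every public spdk/*.h is compiled as its own translation unit, so a header missing an include or forward declaration fails here on its own instead of deep inside whichever source file happens to pull it in first. A minimal sketch of the same idea — the paths and compiler flags are illustrative, not the exact rules the SPDK build uses:

  #!/usr/bin/env bash
  # Compile each public header standalone; non-self-contained headers fail here.
  set -u
  include_dir=include/spdk            # assumed layout; adjust to the real tree
  tmpdir=$(mktemp -d)
  trap 'rm -rf "$tmpdir"' EXIT
  for hdr in "$include_dir"/*.h; do
      name=$(basename "$hdr" .h)
      printf '#include <spdk/%s.h>\n' "$name" > "$tmpdir/$name.cpp"
      g++ -I include -c "$tmpdir/$name.cpp" -o "$tmpdir/$name.o" ||
          echo "not self-contained: $hdr"
  done
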
00:15:25.878 CXX test/cpp_headers/scheduler.o 00:15:25.878 CXX test/cpp_headers/scsi.o 00:15:25.878 CXX test/cpp_headers/scsi_spec.o 00:15:25.878 CXX test/cpp_headers/sock.o 00:15:25.878 CXX test/cpp_headers/stdinc.o 00:15:25.878 CXX test/cpp_headers/string.o 00:15:25.878 CXX test/cpp_headers/thread.o 00:15:25.878 CXX test/cpp_headers/trace.o 00:15:25.878 CXX test/cpp_headers/trace_parser.o 00:15:25.878 CXX test/cpp_headers/tree.o 00:15:25.878 CXX test/cpp_headers/ublk.o 00:15:25.878 CXX test/cpp_headers/util.o 00:15:25.878 CXX test/cpp_headers/uuid.o 00:15:26.138 CXX test/cpp_headers/version.o 00:15:26.138 CXX test/cpp_headers/vfio_user_pci.o 00:15:26.138 CXX test/cpp_headers/vfio_user_spec.o 00:15:26.138 CXX test/cpp_headers/vhost.o 00:15:26.138 CXX test/cpp_headers/vmd.o 00:15:26.138 CXX test/cpp_headers/xor.o 00:15:26.138 CXX test/cpp_headers/zipf.o 00:15:26.138 LINK cuse 00:15:27.514 LINK esnap 00:15:28.082 00:15:28.082 real 1m25.221s 00:15:28.082 user 7m32.985s 00:15:28.082 sys 1m52.887s 00:15:28.082 16:30:04 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:15:28.082 ************************************ 00:15:28.082 END TEST make 00:15:28.082 ************************************ 00:15:28.082 16:30:04 make -- common/autotest_common.sh@10 -- $ set +x 00:15:28.082 16:30:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:15:28.082 16:30:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:15:28.082 16:30:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:15:28.082 16:30:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.082 16:30:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:15:28.082 16:30:04 -- pm/common@44 -- $ pid=5273 00:15:28.082 16:30:04 -- pm/common@50 -- $ kill -TERM 5273 00:15:28.082 16:30:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.082 16:30:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:15:28.082 16:30:04 -- pm/common@44 -- $ pid=5275 00:15:28.082 16:30:04 -- pm/common@50 -- $ kill -TERM 5275 00:15:28.082 16:30:04 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:28.082 16:30:04 -- common/autotest_common.sh@1691 -- # lcov --version 00:15:28.082 16:30:04 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:28.341 16:30:04 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:28.341 16:30:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.341 16:30:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.341 16:30:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.341 16:30:04 -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.341 16:30:04 -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.341 16:30:04 -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.341 16:30:04 -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.341 16:30:04 -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.341 16:30:04 -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.341 16:30:04 -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.341 16:30:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.341 16:30:04 -- scripts/common.sh@344 -- # case "$op" in 00:15:28.341 16:30:04 -- scripts/common.sh@345 -- # : 1 00:15:28.341 16:30:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.341 16:30:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.341 16:30:04 -- scripts/common.sh@365 -- # decimal 1 00:15:28.341 16:30:04 -- scripts/common.sh@353 -- # local d=1 00:15:28.341 16:30:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.341 16:30:04 -- scripts/common.sh@355 -- # echo 1 00:15:28.341 16:30:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.341 16:30:04 -- scripts/common.sh@366 -- # decimal 2 00:15:28.341 16:30:04 -- scripts/common.sh@353 -- # local d=2 00:15:28.341 16:30:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.341 16:30:04 -- scripts/common.sh@355 -- # echo 2 00:15:28.341 16:30:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.341 16:30:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.341 16:30:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.341 16:30:04 -- scripts/common.sh@368 -- # return 0 00:15:28.341 16:30:04 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.341 16:30:04 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:28.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.341 --rc genhtml_branch_coverage=1 00:15:28.341 --rc genhtml_function_coverage=1 00:15:28.341 --rc genhtml_legend=1 00:15:28.341 --rc geninfo_all_blocks=1 00:15:28.341 --rc geninfo_unexecuted_blocks=1 00:15:28.341 00:15:28.341 ' 00:15:28.341 16:30:04 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:28.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.341 --rc genhtml_branch_coverage=1 00:15:28.341 --rc genhtml_function_coverage=1 00:15:28.341 --rc genhtml_legend=1 00:15:28.341 --rc geninfo_all_blocks=1 00:15:28.341 --rc geninfo_unexecuted_blocks=1 00:15:28.341 00:15:28.341 ' 00:15:28.341 16:30:04 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:28.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.341 --rc genhtml_branch_coverage=1 00:15:28.341 --rc genhtml_function_coverage=1 00:15:28.341 --rc genhtml_legend=1 00:15:28.341 --rc geninfo_all_blocks=1 00:15:28.341 --rc geninfo_unexecuted_blocks=1 00:15:28.341 00:15:28.341 ' 00:15:28.341 16:30:04 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:28.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.341 --rc genhtml_branch_coverage=1 00:15:28.341 --rc genhtml_function_coverage=1 00:15:28.341 --rc genhtml_legend=1 00:15:28.341 --rc geninfo_all_blocks=1 00:15:28.341 --rc geninfo_unexecuted_blocks=1 00:15:28.341 00:15:28.341 ' 00:15:28.341 16:30:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.341 16:30:04 -- nvmf/common.sh@7 -- # uname -s 00:15:28.341 16:30:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.341 16:30:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.341 16:30:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.341 16:30:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.341 16:30:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.341 16:30:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.341 16:30:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.341 16:30:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.341 16:30:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.341 16:30:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.341 16:30:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba9c5064-9e78-405f-b6ca-bee3ef04967c 00:15:28.341 
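
The cmp_versions walk traced just above — split each version string on IFS=.-: with read -ra, then compare components numerically position by position — is how autotest decides whether the installed lcov predates 2.x and needs the old option spelling. The same split-and-compare pattern as a standalone sketch, reduced to the strictly-less-than case (the in-tree helper additionally validates each component as a decimal, as its [[ 1 =~ ^[0-9]+$ ]] step shows):

  #!/usr/bin/env bash
  # version_lt A B: succeeds when version A sorts strictly before version B.
  version_lt() {
      local -a ver1 ver2
      local v
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      # Walk the longer of the two arrays; missing components count as 0.
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal is not "less than"
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
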
16:30:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=ba9c5064-9e78-405f-b6ca-bee3ef04967c 00:15:28.341 16:30:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.341 16:30:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.341 16:30:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:28.341 16:30:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.341 16:30:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.341 16:30:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.341 16:30:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.341 16:30:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.341 16:30:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.341 16:30:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.341 16:30:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.341 16:30:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.341 16:30:04 -- paths/export.sh@5 -- # export PATH 00:15:28.341 16:30:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.341 16:30:04 -- nvmf/common.sh@51 -- # : 0 00:15:28.341 16:30:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:28.341 16:30:04 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:28.342 16:30:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.342 16:30:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.342 16:30:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.342 16:30:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:28.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:28.342 16:30:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:28.342 16:30:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:28.342 16:30:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:28.342 16:30:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:15:28.342 16:30:04 -- spdk/autotest.sh@32 -- # uname -s 00:15:28.342 16:30:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:15:28.342 16:30:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:15:28.342 16:30:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:28.342 16:30:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:15:28.342 16:30:04 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:28.342 16:30:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:15:28.342 16:30:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:15:28.342 16:30:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:15:28.342 16:30:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:15:28.342 16:30:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54745 00:15:28.342 16:30:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:15:28.342 16:30:04 -- pm/common@17 -- # local monitor 00:15:28.342 16:30:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.342 16:30:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:28.342 16:30:04 -- pm/common@21 -- # date +%s 00:15:28.342 16:30:04 -- pm/common@25 -- # sleep 1 00:15:28.342 16:30:04 -- pm/common@21 -- # date +%s 00:15:28.342 16:30:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729182604 00:15:28.342 16:30:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1729182604 00:15:28.342 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729182604_collect-cpu-load.pm.log 00:15:28.342 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1729182604_collect-vmstat.pm.log 00:15:29.278 16:30:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:15:29.278 16:30:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:15:29.278 16:30:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.278 16:30:05 -- common/autotest_common.sh@10 -- # set +x 00:15:29.278 16:30:05 -- spdk/autotest.sh@59 -- # create_test_list 00:15:29.278 16:30:05 -- common/autotest_common.sh@748 -- # xtrace_disable 00:15:29.278 16:30:05 -- common/autotest_common.sh@10 -- # set +x 00:15:29.536 16:30:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:15:29.536 16:30:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:15:29.536 16:30:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:15:29.536 16:30:05 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:15:29.537 16:30:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:15:29.537 16:30:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:15:29.537 16:30:05 -- common/autotest_common.sh@1455 -- # uname 00:15:29.537 16:30:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:15:29.537 16:30:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:15:29.537 16:30:05 -- common/autotest_common.sh@1475 -- # uname 00:15:29.537 16:30:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:15:29.537 16:30:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:15:29.537 16:30:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:15:29.537 lcov: LCOV version 1.15 00:15:29.537 16:30:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:15:44.476 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:15:44.476 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:16:02.555 16:30:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:16:02.555 16:30:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:02.555 16:30:37 -- common/autotest_common.sh@10 -- # set +x 00:16:02.555 16:30:37 -- spdk/autotest.sh@78 -- # rm -f 00:16:02.555 16:30:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:02.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:03.123 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:03.123 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:03.123 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:16:03.123 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:16:03.123 16:30:39 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:16:03.123 16:30:39 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:16:03.123 16:30:39 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:16:03.123 16:30:39 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:16:03.123 16:30:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:03.123 16:30:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:03.123 16:30:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:03.123 16:30:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:03.123 16:30:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:16:03.123 16:30:39 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:16:03.123 16:30:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:03.123 16:30:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:16:03.123 16:30:39 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:16:03.123 16:30:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:03.123 16:30:39 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:03.123 16:30:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:03.123 16:30:39 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:16:03.123 16:30:39 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:03.123 16:30:39 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:03.123 16:30:39 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:16:03.123 16:30:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:03.123 16:30:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:03.123 16:30:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:16:03.123 16:30:39 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:16:03.123 16:30:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:03.123 No valid GPT data, bailing 00:16:03.123 16:30:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:03.123 16:30:39 -- scripts/common.sh@394 -- # pt= 00:16:03.123 16:30:39 -- scripts/common.sh@395 -- # return 1 00:16:03.123 16:30:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:16:03.123 1+0 records in 00:16:03.123 1+0 records out 00:16:03.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161074 s, 65.1 MB/s 00:16:03.123 16:30:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:03.123 16:30:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:03.123 16:30:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:16:03.123 16:30:39 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:16:03.123 16:30:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:16:03.381 No valid GPT data, bailing 00:16:03.381 16:30:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:03.381 16:30:39 -- scripts/common.sh@394 -- # pt= 00:16:03.381 16:30:39 -- scripts/common.sh@395 -- # return 1 00:16:03.381 16:30:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:16:03.381 1+0 records in 00:16:03.381 1+0 records out 00:16:03.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449791 s, 233 MB/s 00:16:03.381 16:30:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:03.381 16:30:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:03.381 16:30:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:16:03.381 16:30:39 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:16:03.381 16:30:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:16:03.381 No valid GPT data, bailing 00:16:03.381 16:30:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:16:03.381 16:30:39 -- scripts/common.sh@394 -- # pt= 00:16:03.381 16:30:39 -- scripts/common.sh@395 -- # return 1 00:16:03.381 16:30:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:16:03.381 1+0 
records in 00:16:03.381 1+0 records out 00:16:03.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598588 s, 175 MB/s 00:16:03.381 16:30:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:03.381 16:30:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:03.381 16:30:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:16:03.381 16:30:39 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:16:03.381 16:30:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:16:03.381 No valid GPT data, bailing 00:16:03.381 16:30:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:16:03.381 16:30:39 -- scripts/common.sh@394 -- # pt= 00:16:03.382 16:30:39 -- scripts/common.sh@395 -- # return 1 00:16:03.382 16:30:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:16:03.382 1+0 records in 00:16:03.382 1+0 records out 00:16:03.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505568 s, 207 MB/s 00:16:03.382 16:30:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:03.382 16:30:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:03.382 16:30:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:16:03.382 16:30:39 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:16:03.382 16:30:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:16:03.640 No valid GPT data, bailing 00:16:03.640 16:30:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:16:03.640 16:30:39 -- scripts/common.sh@394 -- # pt= 00:16:03.640 16:30:39 -- scripts/common.sh@395 -- # return 1 00:16:03.640 16:30:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:16:03.640 1+0 records in 00:16:03.640 1+0 records out 00:16:03.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561388 s, 187 MB/s 00:16:03.640 16:30:39 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:16:03.640 16:30:39 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:16:03.640 16:30:39 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:16:03.640 16:30:39 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:16:03.640 16:30:39 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:16:03.640 No valid GPT data, bailing 00:16:03.640 16:30:39 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:16:03.640 16:30:39 -- scripts/common.sh@394 -- # pt= 00:16:03.640 16:30:39 -- scripts/common.sh@395 -- # return 1 00:16:03.640 16:30:39 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:16:03.640 1+0 records in 00:16:03.640 1+0 records out 00:16:03.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619897 s, 169 MB/s 00:16:03.640 16:30:39 -- spdk/autotest.sh@105 -- # sync 00:16:03.640 16:30:39 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:16:03.640 16:30:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:16:03.640 16:30:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:16:06.928 16:30:42 -- spdk/autotest.sh@111 -- # uname -s 00:16:06.928 16:30:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:16:06.928 16:30:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:16:06.928 16:30:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:07.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:07.786 
Hugepages 00:16:07.786 node hugesize free / total 00:16:07.786 node0 1048576kB 0 / 0 00:16:07.786 node0 2048kB 0 / 0 00:16:07.786 00:16:07.786 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:08.045 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:16:08.045 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:16:08.303 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:16:08.303 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:16:08.563 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:16:08.563 16:30:44 -- spdk/autotest.sh@117 -- # uname -s 00:16:08.563 16:30:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:16:08.563 16:30:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:16:08.563 16:30:44 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:09.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:10.066 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:10.066 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:10.066 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:10.066 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:10.066 16:30:46 -- common/autotest_common.sh@1515 -- # sleep 1 00:16:11.446 16:30:47 -- common/autotest_common.sh@1516 -- # bdfs=() 00:16:11.446 16:30:47 -- common/autotest_common.sh@1516 -- # local bdfs 00:16:11.446 16:30:47 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:16:11.446 16:30:47 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:16:11.446 16:30:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:16:11.446 16:30:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:16:11.446 16:30:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:11.446 16:30:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:11.446 16:30:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:16:11.446 16:30:47 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:16:11.446 16:30:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:11.446 16:30:47 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:11.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:11.964 Waiting for block devices as requested 00:16:11.964 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:12.222 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:12.222 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:12.480 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:17.793 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:17.793 16:30:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:16:17.793 16:30:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:16:17.793 16:30:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:16:17.793 16:30:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:16:17.793 16:30:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:17.793 16:30:53 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:16:17.793 16:30:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:17.793 16:30:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:16:17.793 16:30:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:16:17.793 16:30:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:16:17.793 16:30:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:16:17.793 16:30:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:16:17.793 16:30:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:16:17.793 16:30:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:16:17.793 16:30:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:16:17.793 16:30:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:16:17.793 16:30:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:16:17.793 16:30:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:16:17.793 16:30:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:16:17.793 16:30:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:16:17.793 16:30:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:16:17.793 16:30:53 -- common/autotest_common.sh@1541 -- # continue 00:16:17.793 16:30:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:16:17.793 16:30:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:16:17.793 16:30:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:16:17.793 16:30:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:16:17.793 16:30:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:17.793 16:30:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:16:17.793 16:30:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:17.793 16:30:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:16:17.793 16:30:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:16:17.793 16:30:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:16:17.793 16:30:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:16:17.794 16:30:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:16:17.794 16:30:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:16:17.794 16:30:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1541 -- # continue 00:16:17.794 16:30:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:16:17.794 16:30:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:16:17.794 16:30:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 
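
The readlink/grep/basename dance repeated through this block is get_nvme_ctrlr_from_bdf: each /sys/class/nvme/nvmeN symlink resolves to a sysfs path that embeds the owning PCI address, so filtering the resolved paths by BDF and taking the basename recovers the controller name (note 0000:00:10.0 came back as nvme1 and 0000:00:11.0 as nvme0 — enumeration order is not BDF order). The lookup as a standalone function, a sketch of the traced logic rather than the in-tree helper:

  #!/usr/bin/env bash
  # Map a PCI BDF to its NVMe character device, e.g. 0000:00:10.0 -> /dev/nvme1.
  bdf_to_ctrlr() {
      local bdf=$1 path
      path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
      printf '/dev/%s\n' "$(basename "$path")"
  }
  bdf_to_ctrlr 0000:00:10.0
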
00:16:17.794 16:30:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:16:17.794 16:30:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:16:17.794 16:30:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:16:17.794 16:30:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1541 -- # continue 00:16:17.794 16:30:53 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:16:17.794 16:30:53 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:16:17.794 16:30:53 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:16:17.794 16:30:53 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # grep oacs 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:16:17.794 16:30:53 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:16:17.794 16:30:53 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:16:17.794 16:30:53 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:16:17.794 16:30:53 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
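
Each controller is then interrogated the same way: nvme id-ctrl output is piped through grep and cut to lift single fields, oacs is reduced to its Namespace Management bit (the oacs=' 0x12a' -> oacs_ns_manage=8 step), and unvmcap must read 0 so no unallocated capacity is left over from a previous run. Condensed into a sketch — the explicit bit mask is my reading of how 0x12a becomes 8, not a quote of the script:

  #!/usr/bin/env bash
  # Lift one field from `nvme id-ctrl`, e.g. "oacs : 0x12a" -> " 0x12a".
  id_ctrl_field() {
      nvme id-ctrl "$1" | grep "$2" | cut -d: -f2
  }
  ctrlr=/dev/nvme1                     # one of the controllers resolved above
  oacs=$(id_ctrl_field "$ctrlr" oacs)
  if (( oacs & 0x8 )); then            # bit 3: Namespace Management supported
      unvmcap=$(id_ctrl_field "$ctrlr" unvmcap)
      (( unvmcap == 0 )) || echo "leftover unallocated capacity on $ctrlr"
  fi
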
00:16:17.794 16:30:53 -- common/autotest_common.sh@1541 -- # continue 00:16:17.794 16:30:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:16:17.794 16:30:53 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:17.794 16:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:17.794 16:30:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:16:17.794 16:30:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.794 16:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:17.794 16:30:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:18.362 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:19.299 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:19.299 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:19.299 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:19.299 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:19.299 16:30:55 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:16:19.299 16:30:55 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.300 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:16:19.300 16:30:55 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:16:19.300 16:30:55 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:16:19.300 16:30:55 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:16:19.300 16:30:55 -- common/autotest_common.sh@1561 -- # bdfs=() 00:16:19.300 16:30:55 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:16:19.300 16:30:55 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:16:19.300 16:30:55 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:16:19.300 16:30:55 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:16:19.300 16:30:55 -- common/autotest_common.sh@1496 -- # bdfs=() 00:16:19.300 16:30:55 -- common/autotest_common.sh@1496 -- # local bdfs 00:16:19.300 16:30:55 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:19.300 16:30:55 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:16:19.300 16:30:55 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:19.559 16:30:55 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:16:19.559 16:30:55 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:16:19.559 16:30:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # device=0x0010 00:16:19.559 16:30:55 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:19.559 16:30:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # device=0x0010 00:16:19.559 16:30:55 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:19.559 16:30:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # device=0x0010 00:16:19.559 16:30:55 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
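
opal_revert_cleanup, in progress here, only has work to do on controllers whose PCI device ID is 0x0a54 (that is what the escaped \0\x\0\a\5\4 pattern matches); the ID comes straight out of sysfs, and since every controller on this QEMU rig reports 0x0010, each iteration falls through and the revert ends up a no-op as the loop finishes just below. The probe itself is one cat per device:

  #!/usr/bin/env bash
  # Print the BDFs whose PCI device ID matches the revert target.
  target=0x0a54
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do   # this rig's NVMe BDFs
      device=$(cat "/sys/bus/pci/devices/$bdf/device")
      [[ $device == "$target" ]] && echo "$bdf"
  done
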
00:16:19.559 16:30:55 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:16:19.559 16:30:55 -- common/autotest_common.sh@1564 -- # device=0x0010 00:16:19.559 16:30:55 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:19.559 16:30:55 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:16:19.559 16:30:55 -- common/autotest_common.sh@1570 -- # return 0 00:16:19.559 16:30:55 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:16:19.559 16:30:55 -- common/autotest_common.sh@1578 -- # return 0 00:16:19.559 16:30:55 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:16:19.559 16:30:55 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:16:19.559 16:30:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:16:19.559 16:30:55 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:16:19.559 16:30:55 -- spdk/autotest.sh@149 -- # timing_enter lib 00:16:19.559 16:30:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.559 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 16:30:55 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:16:19.559 16:30:55 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:19.559 16:30:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:19.559 16:30:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.559 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:16:19.559 ************************************ 00:16:19.559 START TEST env 00:16:19.559 ************************************ 00:16:19.559 16:30:55 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:19.818 * Looking for test storage... 00:16:19.818 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:16:19.818 16:30:55 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1691 -- # lcov --version 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:19.819 16:30:55 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.819 16:30:55 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.819 16:30:55 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.819 16:30:55 env -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.819 16:30:55 env -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.819 16:30:55 env -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.819 16:30:55 env -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.819 16:30:55 env -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.819 16:30:55 env -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.819 16:30:55 env -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.819 16:30:55 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.819 16:30:55 env -- scripts/common.sh@344 -- # case "$op" in 00:16:19.819 16:30:55 env -- scripts/common.sh@345 -- # : 1 00:16:19.819 16:30:55 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.819 16:30:55 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.819 16:30:55 env -- scripts/common.sh@365 -- # decimal 1 00:16:19.819 16:30:55 env -- scripts/common.sh@353 -- # local d=1 00:16:19.819 16:30:55 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.819 16:30:55 env -- scripts/common.sh@355 -- # echo 1 00:16:19.819 16:30:55 env -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.819 16:30:55 env -- scripts/common.sh@366 -- # decimal 2 00:16:19.819 16:30:55 env -- scripts/common.sh@353 -- # local d=2 00:16:19.819 16:30:55 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.819 16:30:55 env -- scripts/common.sh@355 -- # echo 2 00:16:19.819 16:30:55 env -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.819 16:30:55 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.819 16:30:55 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.819 16:30:55 env -- scripts/common.sh@368 -- # return 0 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.819 --rc genhtml_branch_coverage=1 00:16:19.819 --rc genhtml_function_coverage=1 00:16:19.819 --rc genhtml_legend=1 00:16:19.819 --rc geninfo_all_blocks=1 00:16:19.819 --rc geninfo_unexecuted_blocks=1 00:16:19.819 00:16:19.819 ' 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.819 --rc genhtml_branch_coverage=1 00:16:19.819 --rc genhtml_function_coverage=1 00:16:19.819 --rc genhtml_legend=1 00:16:19.819 --rc geninfo_all_blocks=1 00:16:19.819 --rc geninfo_unexecuted_blocks=1 00:16:19.819 00:16:19.819 ' 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.819 --rc genhtml_branch_coverage=1 00:16:19.819 --rc genhtml_function_coverage=1 00:16:19.819 --rc genhtml_legend=1 00:16:19.819 --rc geninfo_all_blocks=1 00:16:19.819 --rc geninfo_unexecuted_blocks=1 00:16:19.819 00:16:19.819 ' 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.819 --rc genhtml_branch_coverage=1 00:16:19.819 --rc genhtml_function_coverage=1 00:16:19.819 --rc genhtml_legend=1 00:16:19.819 --rc geninfo_all_blocks=1 00:16:19.819 --rc geninfo_unexecuted_blocks=1 00:16:19.819 00:16:19.819 ' 00:16:19.819 16:30:55 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:19.819 16:30:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.819 16:30:55 env -- common/autotest_common.sh@10 -- # set +x 00:16:19.819 ************************************ 00:16:19.819 START TEST env_memory 00:16:19.819 ************************************ 00:16:19.819 16:30:55 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:19.819 00:16:19.819 00:16:19.819 CUnit - A unit testing framework for C - Version 2.1-3 00:16:19.819 http://cunit.sourceforge.net/ 00:16:19.819 00:16:19.819 00:16:19.819 Suite: memory 00:16:19.819 Test: alloc and free memory map ...[2024-10-17 16:30:56.065816] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:16:20.078 passed 00:16:20.078 Test: mem map translation ...[2024-10-17 16:30:56.136012] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:16:20.078 [2024-10-17 16:30:56.136263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:16:20.078 [2024-10-17 16:30:56.136522] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:16:20.078 [2024-10-17 16:30:56.136753] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:16:20.078 passed 00:16:20.078 Test: mem map registration ...[2024-10-17 16:30:56.230336] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:16:20.078 [2024-10-17 16:30:56.230427] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:16:20.078 passed 00:16:20.078 Test: mem map adjacent registrations ...passed 00:16:20.078 00:16:20.078 Run Summary: Type Total Ran Passed Failed Inactive 00:16:20.078 suites 1 1 n/a 0 0 00:16:20.078 tests 4 4 4 0 0 00:16:20.078 asserts 152 152 152 0 n/a 00:16:20.078 00:16:20.078 Elapsed time = 0.304 seconds 00:16:20.078 00:16:20.078 real 0m0.351s 00:16:20.078 user 0m0.313s 00:16:20.078 sys 0m0.028s 00:16:20.078 ************************************ 00:16:20.078 END TEST env_memory 00:16:20.078 ************************************ 00:16:20.078 16:30:56 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:20.078 16:30:56 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:16:20.336 16:30:56 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:20.336 16:30:56 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:20.336 16:30:56 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:20.336 16:30:56 env -- common/autotest_common.sh@10 -- # set +x 00:16:20.336 ************************************ 00:16:20.336 START TEST env_vtophys 00:16:20.336 ************************************ 00:16:20.336 16:30:56 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:20.336 EAL: lib.eal log level changed from notice to debug 00:16:20.336 EAL: Detected lcore 0 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 1 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 2 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 3 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 4 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 5 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 6 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 7 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 8 as core 0 on socket 0 00:16:20.336 EAL: Detected lcore 9 as core 0 on socket 0 00:16:20.336 EAL: Maximum logical cores by configuration: 128 00:16:20.336 EAL: Detected CPU lcores: 10 00:16:20.336 EAL: Detected NUMA nodes: 1 00:16:20.336 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:16:20.336 EAL: Detected shared linkage of DPDK 00:16:20.336 EAL: No 
shared files mode enabled, IPC will be disabled 00:16:20.336 EAL: Selected IOVA mode 'PA' 00:16:20.336 EAL: Probing VFIO support... 00:16:20.336 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:20.336 EAL: VFIO modules not loaded, skipping VFIO support... 00:16:20.336 EAL: Ask a virtual area of 0x2e000 bytes 00:16:20.336 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:16:20.336 EAL: Setting up physically contiguous memory... 00:16:20.336 EAL: Setting maximum number of open files to 524288 00:16:20.336 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:16:20.336 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:16:20.336 EAL: Ask a virtual area of 0x61000 bytes 00:16:20.336 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:16:20.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:20.336 EAL: Ask a virtual area of 0x400000000 bytes 00:16:20.336 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:16:20.336 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:16:20.336 EAL: Ask a virtual area of 0x61000 bytes 00:16:20.336 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:16:20.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:20.336 EAL: Ask a virtual area of 0x400000000 bytes 00:16:20.336 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:16:20.336 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:16:20.336 EAL: Ask a virtual area of 0x61000 bytes 00:16:20.336 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:16:20.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:20.336 EAL: Ask a virtual area of 0x400000000 bytes 00:16:20.336 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:16:20.336 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:16:20.336 EAL: Ask a virtual area of 0x61000 bytes 00:16:20.336 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:16:20.336 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:20.337 EAL: Ask a virtual area of 0x400000000 bytes 00:16:20.337 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:16:20.337 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:16:20.337 EAL: Hugepages will be freed exactly as allocated. 00:16:20.337 EAL: No shared files mode enabled, IPC is disabled 00:16:20.337 EAL: No shared files mode enabled, IPC is disabled 00:16:20.337 EAL: TSC frequency is ~2490000 KHz 00:16:20.337 EAL: Main lcore 0 is ready (tid=7f802b827a40;cpuset=[0]) 00:16:20.337 EAL: Trying to obtain current memory policy. 00:16:20.337 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:20.337 EAL: Restoring previous memory policy: 0 00:16:20.337 EAL: request: mp_malloc_sync 00:16:20.337 EAL: No shared files mode enabled, IPC is disabled 00:16:20.337 EAL: Heap on socket 0 was expanded by 2MB 00:16:20.337 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:20.337 EAL: No PCI address specified using 'addr=' in: bus=pci 00:16:20.337 EAL: Mem event callback 'spdk:(nil)' registered 00:16:20.337 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:16:20.595 00:16:20.595 00:16:20.595 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.595 http://cunit.sourceforge.net/ 00:16:20.595 00:16:20.595 00:16:20.595 Suite: components_suite 00:16:20.853 Test: vtophys_malloc_test ...passed 00:16:20.853 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:16:20.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:20.853 EAL: Restoring previous memory policy: 4 00:16:20.853 EAL: Calling mem event callback 'spdk:(nil)' 00:16:20.854 EAL: request: mp_malloc_sync 00:16:20.854 EAL: No shared files mode enabled, IPC is disabled 00:16:20.854 EAL: Heap on socket 0 was expanded by 4MB 00:16:20.854 EAL: Calling mem event callback 'spdk:(nil)' 00:16:20.854 EAL: request: mp_malloc_sync 00:16:20.854 EAL: No shared files mode enabled, IPC is disabled 00:16:20.854 EAL: Heap on socket 0 was shrunk by 4MB 00:16:20.854 EAL: Trying to obtain current memory policy. 00:16:20.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:20.854 EAL: Restoring previous memory policy: 4 00:16:20.854 EAL: Calling mem event callback 'spdk:(nil)' 00:16:20.854 EAL: request: mp_malloc_sync 00:16:20.854 EAL: No shared files mode enabled, IPC is disabled 00:16:20.854 EAL: Heap on socket 0 was expanded by 6MB 00:16:20.854 EAL: Calling mem event callback 'spdk:(nil)' 00:16:20.854 EAL: request: mp_malloc_sync 00:16:20.854 EAL: No shared files mode enabled, IPC is disabled 00:16:20.854 EAL: Heap on socket 0 was shrunk by 6MB 00:16:20.854 EAL: Trying to obtain current memory policy. 00:16:20.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:20.854 EAL: Restoring previous memory policy: 4 00:16:20.854 EAL: Calling mem event callback 'spdk:(nil)' 00:16:20.854 EAL: request: mp_malloc_sync 00:16:20.854 EAL: No shared files mode enabled, IPC is disabled 00:16:20.854 EAL: Heap on socket 0 was expanded by 10MB 00:16:20.854 EAL: Calling mem event callback 'spdk:(nil)' 00:16:20.854 EAL: request: mp_malloc_sync 00:16:20.854 EAL: No shared files mode enabled, IPC is disabled 00:16:20.854 EAL: Heap on socket 0 was shrunk by 10MB 00:16:20.854 EAL: Trying to obtain current memory policy. 00:16:20.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:20.854 EAL: Restoring previous memory policy: 4 00:16:20.854 EAL: Calling mem event callback 'spdk:(nil)' 00:16:20.854 EAL: request: mp_malloc_sync 00:16:20.854 EAL: No shared files mode enabled, IPC is disabled 00:16:20.854 EAL: Heap on socket 0 was expanded by 18MB 00:16:21.112 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.112 EAL: request: mp_malloc_sync 00:16:21.112 EAL: No shared files mode enabled, IPC is disabled 00:16:21.112 EAL: Heap on socket 0 was shrunk by 18MB 00:16:21.112 EAL: Trying to obtain current memory policy. 00:16:21.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:21.112 EAL: Restoring previous memory policy: 4 00:16:21.112 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.112 EAL: request: mp_malloc_sync 00:16:21.112 EAL: No shared files mode enabled, IPC is disabled 00:16:21.112 EAL: Heap on socket 0 was expanded by 34MB 00:16:21.112 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.112 EAL: request: mp_malloc_sync 00:16:21.112 EAL: No shared files mode enabled, IPC is disabled 00:16:21.112 EAL: Heap on socket 0 was shrunk by 34MB 00:16:21.112 EAL: Trying to obtain current memory policy. 
00:16:21.112 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:21.112 EAL: Restoring previous memory policy: 4 00:16:21.112 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.112 EAL: request: mp_malloc_sync 00:16:21.112 EAL: No shared files mode enabled, IPC is disabled 00:16:21.112 EAL: Heap on socket 0 was expanded by 66MB 00:16:21.371 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.371 EAL: request: mp_malloc_sync 00:16:21.371 EAL: No shared files mode enabled, IPC is disabled 00:16:21.371 EAL: Heap on socket 0 was shrunk by 66MB 00:16:21.371 EAL: Trying to obtain current memory policy. 00:16:21.371 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:21.371 EAL: Restoring previous memory policy: 4 00:16:21.371 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.371 EAL: request: mp_malloc_sync 00:16:21.371 EAL: No shared files mode enabled, IPC is disabled 00:16:21.371 EAL: Heap on socket 0 was expanded by 130MB 00:16:21.664 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.664 EAL: request: mp_malloc_sync 00:16:21.664 EAL: No shared files mode enabled, IPC is disabled 00:16:21.664 EAL: Heap on socket 0 was shrunk by 130MB 00:16:21.923 EAL: Trying to obtain current memory policy. 00:16:21.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:21.923 EAL: Restoring previous memory policy: 4 00:16:21.923 EAL: Calling mem event callback 'spdk:(nil)' 00:16:21.923 EAL: request: mp_malloc_sync 00:16:21.923 EAL: No shared files mode enabled, IPC is disabled 00:16:21.923 EAL: Heap on socket 0 was expanded by 258MB 00:16:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:16:22.492 EAL: request: mp_malloc_sync 00:16:22.492 EAL: No shared files mode enabled, IPC is disabled 00:16:22.492 EAL: Heap on socket 0 was shrunk by 258MB 00:16:22.751 EAL: Trying to obtain current memory policy. 00:16:22.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:23.010 EAL: Restoring previous memory policy: 4 00:16:23.010 EAL: Calling mem event callback 'spdk:(nil)' 00:16:23.010 EAL: request: mp_malloc_sync 00:16:23.010 EAL: No shared files mode enabled, IPC is disabled 00:16:23.010 EAL: Heap on socket 0 was expanded by 514MB 00:16:23.947 EAL: Calling mem event callback 'spdk:(nil)' 00:16:23.947 EAL: request: mp_malloc_sync 00:16:23.947 EAL: No shared files mode enabled, IPC is disabled 00:16:23.947 EAL: Heap on socket 0 was shrunk by 514MB 00:16:24.885 EAL: Trying to obtain current memory policy. 
00:16:24.885 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:25.145 EAL: Restoring previous memory policy: 4 00:16:25.145 EAL: Calling mem event callback 'spdk:(nil)' 00:16:25.145 EAL: request: mp_malloc_sync 00:16:25.145 EAL: No shared files mode enabled, IPC is disabled 00:16:25.145 EAL: Heap on socket 0 was expanded by 1026MB 00:16:27.051 EAL: Calling mem event callback 'spdk:(nil)' 00:16:27.051 EAL: request: mp_malloc_sync 00:16:27.051 EAL: No shared files mode enabled, IPC is disabled 00:16:27.051 EAL: Heap on socket 0 was shrunk by 1026MB 00:16:28.957 passed 00:16:28.957 00:16:28.957 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.957 suites 1 1 n/a 0 0 00:16:28.957 tests 2 2 2 0 0 00:16:28.957 asserts 5614 5614 5614 0 n/a 00:16:28.957 00:16:28.957 Elapsed time = 8.413 seconds 00:16:28.957 EAL: Calling mem event callback 'spdk:(nil)' 00:16:28.957 EAL: request: mp_malloc_sync 00:16:28.957 EAL: No shared files mode enabled, IPC is disabled 00:16:28.957 EAL: Heap on socket 0 was shrunk by 2MB 00:16:28.957 EAL: No shared files mode enabled, IPC is disabled 00:16:28.957 EAL: No shared files mode enabled, IPC is disabled 00:16:28.957 EAL: No shared files mode enabled, IPC is disabled 00:16:28.957 00:16:28.957 real 0m8.757s 00:16:28.957 user 0m7.703s 00:16:28.957 sys 0m0.888s 00:16:28.957 16:31:05 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.957 ************************************ 00:16:28.957 END TEST env_vtophys 00:16:28.957 ************************************ 00:16:28.957 16:31:05 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:16:28.957 16:31:05 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:16:28.957 16:31:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:28.957 16:31:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.957 16:31:05 env -- common/autotest_common.sh@10 -- # set +x 00:16:28.957 ************************************ 00:16:28.957 START TEST env_pci 00:16:28.957 ************************************ 00:16:28.957 16:31:05 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:16:29.217 00:16:29.217 00:16:29.217 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.217 http://cunit.sourceforge.net/ 00:16:29.217 00:16:29.217 00:16:29.217 Suite: pci 00:16:29.217 Test: pci_hook ...[2024-10-17 16:31:05.285456] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57619 has claimed it 00:16:29.217 EAL: Cannot find device (10000:00:01.0) 00:16:29.217 passed 00:16:29.217 00:16:29.217 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.217 suites 1 1 n/a 0 0 00:16:29.217 tests 1 1 1 0 0 00:16:29.217 asserts 25 25 25 0 n/a 00:16:29.217 00:16:29.217 Elapsed time = 0.013 seconds 00:16:29.217 EAL: Failed to attach device on primary process 00:16:29.217 00:16:29.217 real 0m0.125s 00:16:29.217 user 0m0.048s 00:16:29.217 sys 0m0.075s 00:16:29.217 ************************************ 00:16:29.217 END TEST env_pci 00:16:29.217 ************************************ 00:16:29.217 16:31:05 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.217 16:31:05 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:16:29.217 16:31:05 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:16:29.217 16:31:05 env -- env/env.sh@15 -- # uname 00:16:29.217 16:31:05 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:16:29.217 16:31:05 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:16:29.217 16:31:05 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:16:29.217 16:31:05 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:29.217 16:31:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.217 16:31:05 env -- common/autotest_common.sh@10 -- # set +x 00:16:29.217 ************************************ 00:16:29.217 START TEST env_dpdk_post_init 00:16:29.217 ************************************ 00:16:29.217 16:31:05 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:16:29.476 EAL: Detected CPU lcores: 10 00:16:29.476 EAL: Detected NUMA nodes: 1 00:16:29.476 EAL: Detected shared linkage of DPDK 00:16:29.476 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:16:29.476 EAL: Selected IOVA mode 'PA' 00:16:29.476 TELEMETRY: No legacy callbacks, legacy socket not created 00:16:29.476 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:16:29.476 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:16:29.476 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:16:29.476 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:16:29.476 Starting DPDK initialization... 00:16:29.476 Starting SPDK post initialization... 00:16:29.476 SPDK NVMe probe 00:16:29.476 Attaching to 0000:00:10.0 00:16:29.476 Attaching to 0000:00:11.0 00:16:29.476 Attaching to 0000:00:12.0 00:16:29.476 Attaching to 0000:00:13.0 00:16:29.476 Attached to 0000:00:10.0 00:16:29.476 Attached to 0000:00:11.0 00:16:29.476 Attached to 0000:00:13.0 00:16:29.476 Attached to 0000:00:12.0 00:16:29.476 Cleaning up... 
00:16:29.735 00:16:29.735 real 0m0.324s 00:16:29.735 user 0m0.099s 00:16:29.735 sys 0m0.130s 00:16:29.735 ************************************ 00:16:29.735 END TEST env_dpdk_post_init 00:16:29.735 ************************************ 00:16:29.735 16:31:05 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.735 16:31:05 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:16:29.735 16:31:05 env -- env/env.sh@26 -- # uname 00:16:29.735 16:31:05 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:16:29.735 16:31:05 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:16:29.735 16:31:05 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:29.735 16:31:05 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.735 16:31:05 env -- common/autotest_common.sh@10 -- # set +x 00:16:29.735 ************************************ 00:16:29.735 START TEST env_mem_callbacks 00:16:29.735 ************************************ 00:16:29.735 16:31:05 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:16:29.735 EAL: Detected CPU lcores: 10 00:16:29.735 EAL: Detected NUMA nodes: 1 00:16:29.735 EAL: Detected shared linkage of DPDK 00:16:29.735 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:16:29.735 EAL: Selected IOVA mode 'PA' 00:16:29.995 00:16:29.995 00:16:29.995 CUnit - A unit testing framework for C - Version 2.1-3 00:16:29.995 http://cunit.sourceforge.net/ 00:16:29.995 00:16:29.995 00:16:29.995 Suite: memory 00:16:29.995 Test: test ... 00:16:29.995 TELEMETRY: No legacy callbacks, legacy socket not created 00:16:29.995 register 0x200000200000 2097152 00:16:29.995 malloc 3145728 00:16:29.995 register 0x200000400000 4194304 00:16:29.995 buf 0x2000004fffc0 len 3145728 PASSED 00:16:29.995 malloc 64 00:16:29.995 buf 0x2000004ffec0 len 64 PASSED 00:16:29.995 malloc 4194304 00:16:29.995 register 0x200000800000 6291456 00:16:29.995 buf 0x2000009fffc0 len 4194304 PASSED 00:16:29.995 free 0x2000004fffc0 3145728 00:16:29.995 free 0x2000004ffec0 64 00:16:29.995 unregister 0x200000400000 4194304 PASSED 00:16:29.995 free 0x2000009fffc0 4194304 00:16:29.995 unregister 0x200000800000 6291456 PASSED 00:16:29.995 malloc 8388608 00:16:29.995 register 0x200000400000 10485760 00:16:29.995 buf 0x2000005fffc0 len 8388608 PASSED 00:16:29.995 free 0x2000005fffc0 8388608 00:16:29.995 unregister 0x200000400000 10485760 PASSED 00:16:29.995 passed 00:16:29.995 00:16:29.995 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.995 suites 1 1 n/a 0 0 00:16:29.995 tests 1 1 1 0 0 00:16:29.995 asserts 15 15 15 0 n/a 00:16:29.995 00:16:29.995 Elapsed time = 0.081 seconds 00:16:29.995 00:16:29.995 real 0m0.307s 00:16:29.995 user 0m0.113s 00:16:29.995 sys 0m0.090s 00:16:29.995 16:31:06 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.995 ************************************ 00:16:29.995 END TEST env_mem_callbacks 00:16:29.995 ************************************ 00:16:29.995 16:31:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:16:29.995 ************************************ 00:16:29.995 END TEST env 00:16:29.995 ************************************ 00:16:29.995 00:16:29.995 real 0m10.467s 00:16:29.995 user 0m8.537s 00:16:29.995 sys 0m1.559s 00:16:29.995 16:31:06 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.995 16:31:06 env -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.995 16:31:06 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:16:29.995 16:31:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:29.995 16:31:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.995 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.995 ************************************ 00:16:29.995 START TEST rpc 00:16:29.995 ************************************ 00:16:29.995 16:31:06 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:16:30.255 * Looking for test storage... 00:16:30.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.255 16:31:06 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.255 16:31:06 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.255 16:31:06 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.255 16:31:06 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.255 16:31:06 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.255 16:31:06 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.255 16:31:06 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.255 16:31:06 rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:30.255 16:31:06 rpc -- scripts/common.sh@345 -- # : 1 00:16:30.255 16:31:06 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.255 16:31:06 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.255 16:31:06 rpc -- scripts/common.sh@365 -- # decimal 1 00:16:30.255 16:31:06 rpc -- scripts/common.sh@353 -- # local d=1 00:16:30.255 16:31:06 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.255 16:31:06 rpc -- scripts/common.sh@355 -- # echo 1 00:16:30.255 16:31:06 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.255 16:31:06 rpc -- scripts/common.sh@366 -- # decimal 2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@353 -- # local d=2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.255 16:31:06 rpc -- scripts/common.sh@355 -- # echo 2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.255 16:31:06 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.255 16:31:06 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.255 16:31:06 rpc -- scripts/common.sh@368 -- # return 0 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.255 --rc genhtml_branch_coverage=1 00:16:30.255 --rc genhtml_function_coverage=1 00:16:30.255 --rc genhtml_legend=1 00:16:30.255 --rc geninfo_all_blocks=1 00:16:30.255 --rc geninfo_unexecuted_blocks=1 00:16:30.255 00:16:30.255 ' 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.255 --rc genhtml_branch_coverage=1 00:16:30.255 --rc genhtml_function_coverage=1 00:16:30.255 --rc genhtml_legend=1 00:16:30.255 --rc geninfo_all_blocks=1 00:16:30.255 --rc geninfo_unexecuted_blocks=1 00:16:30.255 00:16:30.255 ' 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.255 --rc genhtml_branch_coverage=1 00:16:30.255 --rc genhtml_function_coverage=1 00:16:30.255 --rc genhtml_legend=1 00:16:30.255 --rc geninfo_all_blocks=1 00:16:30.255 --rc geninfo_unexecuted_blocks=1 00:16:30.255 00:16:30.255 ' 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:30.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.255 --rc genhtml_branch_coverage=1 00:16:30.255 --rc genhtml_function_coverage=1 00:16:30.255 --rc genhtml_legend=1 00:16:30.255 --rc geninfo_all_blocks=1 00:16:30.255 --rc geninfo_unexecuted_blocks=1 00:16:30.255 00:16:30.255 ' 00:16:30.255 16:31:06 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:16:30.255 16:31:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57752 00:16:30.255 16:31:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:30.255 16:31:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57752 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@831 -- # '[' -z 57752 ']' 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:30.255 16:31:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:30.514 [2024-10-17 16:31:06.614843] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:16:30.514 [2024-10-17 16:31:06.615153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57752 ] 00:16:30.514 [2024-10-17 16:31:06.789025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.773 [2024-10-17 16:31:06.910242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:16:30.773 [2024-10-17 16:31:06.910466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57752' to capture a snapshot of events at runtime. 00:16:30.773 [2024-10-17 16:31:06.910566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.773 [2024-10-17 16:31:06.910623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.773 [2024-10-17 16:31:06.910654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57752 for offline analysis/debug. 00:16:30.773 [2024-10-17 16:31:06.912092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.710 16:31:07 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.710 16:31:07 rpc -- common/autotest_common.sh@864 -- # return 0 00:16:31.710 16:31:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:16:31.710 16:31:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:16:31.710 16:31:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:16:31.710 16:31:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:16:31.710 16:31:07 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:31.710 16:31:07 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.710 16:31:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.710 ************************************ 00:16:31.710 START TEST rpc_integrity 00:16:31.710 ************************************ 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:16:31.710 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.710 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:16:31.710 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:16:31.710 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:16:31.710 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.710 16:31:07 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.710 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:16:31.710 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.710 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.711 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.711 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:16:31.711 { 00:16:31.711 "name": "Malloc0", 00:16:31.711 "aliases": [ 00:16:31.711 "14e0287a-5d8d-4b41-b535-08747cf877b6" 00:16:31.711 ], 00:16:31.711 "product_name": "Malloc disk", 00:16:31.711 "block_size": 512, 00:16:31.711 "num_blocks": 16384, 00:16:31.711 "uuid": "14e0287a-5d8d-4b41-b535-08747cf877b6", 00:16:31.711 "assigned_rate_limits": { 00:16:31.711 "rw_ios_per_sec": 0, 00:16:31.711 "rw_mbytes_per_sec": 0, 00:16:31.711 "r_mbytes_per_sec": 0, 00:16:31.711 "w_mbytes_per_sec": 0 00:16:31.711 }, 00:16:31.711 "claimed": false, 00:16:31.711 "zoned": false, 00:16:31.711 "supported_io_types": { 00:16:31.711 "read": true, 00:16:31.711 "write": true, 00:16:31.711 "unmap": true, 00:16:31.711 "flush": true, 00:16:31.711 "reset": true, 00:16:31.711 "nvme_admin": false, 00:16:31.711 "nvme_io": false, 00:16:31.711 "nvme_io_md": false, 00:16:31.711 "write_zeroes": true, 00:16:31.711 "zcopy": true, 00:16:31.711 "get_zone_info": false, 00:16:31.711 "zone_management": false, 00:16:31.711 "zone_append": false, 00:16:31.711 "compare": false, 00:16:31.711 "compare_and_write": false, 00:16:31.711 "abort": true, 00:16:31.711 "seek_hole": false, 00:16:31.711 "seek_data": false, 00:16:31.711 "copy": true, 00:16:31.711 "nvme_iov_md": false 00:16:31.711 }, 00:16:31.711 "memory_domains": [ 00:16:31.711 { 00:16:31.711 "dma_device_id": "system", 00:16:31.711 "dma_device_type": 1 00:16:31.711 }, 00:16:31.711 { 00:16:31.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.711 "dma_device_type": 2 00:16:31.711 } 00:16:31.711 ], 00:16:31.711 "driver_specific": {} 00:16:31.711 } 00:16:31.711 ]' 00:16:31.711 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:16:31.711 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:16:31.711 16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:16:31.711 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.711 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.711 [2024-10-17 16:31:07.988363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:16:31.711 [2024-10-17 16:31:07.988461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.711 [2024-10-17 16:31:07.988505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:31.711 [2024-10-17 16:31:07.988522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.711 [2024-10-17 16:31:07.991364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.711 [2024-10-17 16:31:07.991419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:16:31.711 Passthru0 00:16:31.711 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.711 
16:31:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:16:31.711 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.711 16:31:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.997 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.997 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:16:31.997 { 00:16:31.997 "name": "Malloc0", 00:16:31.997 "aliases": [ 00:16:31.997 "14e0287a-5d8d-4b41-b535-08747cf877b6" 00:16:31.997 ], 00:16:31.998 "product_name": "Malloc disk", 00:16:31.998 "block_size": 512, 00:16:31.998 "num_blocks": 16384, 00:16:31.998 "uuid": "14e0287a-5d8d-4b41-b535-08747cf877b6", 00:16:31.998 "assigned_rate_limits": { 00:16:31.998 "rw_ios_per_sec": 0, 00:16:31.998 "rw_mbytes_per_sec": 0, 00:16:31.998 "r_mbytes_per_sec": 0, 00:16:31.998 "w_mbytes_per_sec": 0 00:16:31.998 }, 00:16:31.998 "claimed": true, 00:16:31.998 "claim_type": "exclusive_write", 00:16:31.998 "zoned": false, 00:16:31.998 "supported_io_types": { 00:16:31.998 "read": true, 00:16:31.998 "write": true, 00:16:31.998 "unmap": true, 00:16:31.998 "flush": true, 00:16:31.998 "reset": true, 00:16:31.998 "nvme_admin": false, 00:16:31.998 "nvme_io": false, 00:16:31.998 "nvme_io_md": false, 00:16:31.998 "write_zeroes": true, 00:16:31.998 "zcopy": true, 00:16:31.998 "get_zone_info": false, 00:16:31.998 "zone_management": false, 00:16:31.998 "zone_append": false, 00:16:31.998 "compare": false, 00:16:31.998 "compare_and_write": false, 00:16:31.998 "abort": true, 00:16:31.998 "seek_hole": false, 00:16:31.998 "seek_data": false, 00:16:31.998 "copy": true, 00:16:31.998 "nvme_iov_md": false 00:16:31.998 }, 00:16:31.998 "memory_domains": [ 00:16:31.998 { 00:16:31.998 "dma_device_id": "system", 00:16:31.998 "dma_device_type": 1 00:16:31.998 }, 00:16:31.998 { 00:16:31.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.998 "dma_device_type": 2 00:16:31.998 } 00:16:31.998 ], 00:16:31.998 "driver_specific": {} 00:16:31.998 }, 00:16:31.998 { 00:16:31.998 "name": "Passthru0", 00:16:31.998 "aliases": [ 00:16:31.998 "5ef5d04c-1117-5a9d-9c65-c0d73e09d897" 00:16:31.998 ], 00:16:31.998 "product_name": "passthru", 00:16:31.998 "block_size": 512, 00:16:31.998 "num_blocks": 16384, 00:16:31.998 "uuid": "5ef5d04c-1117-5a9d-9c65-c0d73e09d897", 00:16:31.998 "assigned_rate_limits": { 00:16:31.998 "rw_ios_per_sec": 0, 00:16:31.998 "rw_mbytes_per_sec": 0, 00:16:31.998 "r_mbytes_per_sec": 0, 00:16:31.998 "w_mbytes_per_sec": 0 00:16:31.998 }, 00:16:31.998 "claimed": false, 00:16:31.998 "zoned": false, 00:16:31.998 "supported_io_types": { 00:16:31.998 "read": true, 00:16:31.998 "write": true, 00:16:31.998 "unmap": true, 00:16:31.998 "flush": true, 00:16:31.998 "reset": true, 00:16:31.998 "nvme_admin": false, 00:16:31.998 "nvme_io": false, 00:16:31.998 "nvme_io_md": false, 00:16:31.998 "write_zeroes": true, 00:16:31.998 "zcopy": true, 00:16:31.998 "get_zone_info": false, 00:16:31.998 "zone_management": false, 00:16:31.998 "zone_append": false, 00:16:31.998 "compare": false, 00:16:31.998 "compare_and_write": false, 00:16:31.998 "abort": true, 00:16:31.998 "seek_hole": false, 00:16:31.998 "seek_data": false, 00:16:31.998 "copy": true, 00:16:31.998 "nvme_iov_md": false 00:16:31.998 }, 00:16:31.998 "memory_domains": [ 00:16:31.998 { 00:16:31.998 "dma_device_id": "system", 00:16:31.998 "dma_device_type": 1 00:16:31.998 }, 00:16:31.998 { 00:16:31.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.998 "dma_device_type": 2 
00:16:31.998 } 00:16:31.998 ], 00:16:31.998 "driver_specific": { 00:16:31.998 "passthru": { 00:16:31.998 "name": "Passthru0", 00:16:31.998 "base_bdev_name": "Malloc0" 00:16:31.998 } 00:16:31.998 } 00:16:31.998 } 00:16:31.998 ]' 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:16:31.998 ************************************ 00:16:31.998 END TEST rpc_integrity 00:16:31.998 ************************************ 00:16:31.998 16:31:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:16:31.998 00:16:31.998 real 0m0.373s 00:16:31.998 user 0m0.200s 00:16:31.998 sys 0m0.064s 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:31.998 16:31:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:31.998 16:31:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:16:31.998 16:31:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:31.998 16:31:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:31.998 16:31:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:31.998 ************************************ 00:16:31.998 START TEST rpc_plugins 00:16:31.998 ************************************ 00:16:31.998 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:16:31.998 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:16:31.998 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.998 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:16:32.258 { 00:16:32.258 "name": "Malloc1", 00:16:32.258 "aliases": 
[ 00:16:32.258 "b72dd144-9f45-40c3-95cf-1a89a1e45b3e" 00:16:32.258 ], 00:16:32.258 "product_name": "Malloc disk", 00:16:32.258 "block_size": 4096, 00:16:32.258 "num_blocks": 256, 00:16:32.258 "uuid": "b72dd144-9f45-40c3-95cf-1a89a1e45b3e", 00:16:32.258 "assigned_rate_limits": { 00:16:32.258 "rw_ios_per_sec": 0, 00:16:32.258 "rw_mbytes_per_sec": 0, 00:16:32.258 "r_mbytes_per_sec": 0, 00:16:32.258 "w_mbytes_per_sec": 0 00:16:32.258 }, 00:16:32.258 "claimed": false, 00:16:32.258 "zoned": false, 00:16:32.258 "supported_io_types": { 00:16:32.258 "read": true, 00:16:32.258 "write": true, 00:16:32.258 "unmap": true, 00:16:32.258 "flush": true, 00:16:32.258 "reset": true, 00:16:32.258 "nvme_admin": false, 00:16:32.258 "nvme_io": false, 00:16:32.258 "nvme_io_md": false, 00:16:32.258 "write_zeroes": true, 00:16:32.258 "zcopy": true, 00:16:32.258 "get_zone_info": false, 00:16:32.258 "zone_management": false, 00:16:32.258 "zone_append": false, 00:16:32.258 "compare": false, 00:16:32.258 "compare_and_write": false, 00:16:32.258 "abort": true, 00:16:32.258 "seek_hole": false, 00:16:32.258 "seek_data": false, 00:16:32.258 "copy": true, 00:16:32.258 "nvme_iov_md": false 00:16:32.258 }, 00:16:32.258 "memory_domains": [ 00:16:32.258 { 00:16:32.258 "dma_device_id": "system", 00:16:32.258 "dma_device_type": 1 00:16:32.258 }, 00:16:32.258 { 00:16:32.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.258 "dma_device_type": 2 00:16:32.258 } 00:16:32.258 ], 00:16:32.258 "driver_specific": {} 00:16:32.258 } 00:16:32.258 ]' 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:16:32.258 ************************************ 00:16:32.258 END TEST rpc_plugins 00:16:32.258 ************************************ 00:16:32.258 16:31:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:16:32.258 00:16:32.258 real 0m0.173s 00:16:32.258 user 0m0.098s 00:16:32.258 sys 0m0.026s 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.258 16:31:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:16:32.258 16:31:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:16:32.258 16:31:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:32.258 16:31:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.258 16:31:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.258 ************************************ 00:16:32.258 START TEST rpc_trace_cmd_test 00:16:32.258 ************************************ 00:16:32.258 16:31:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:16:32.258 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:16:32.258 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:16:32.258 16:31:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.258 16:31:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:16:32.517 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57752", 00:16:32.517 "tpoint_group_mask": "0x8", 00:16:32.517 "iscsi_conn": { 00:16:32.517 "mask": "0x2", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "scsi": { 00:16:32.517 "mask": "0x4", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "bdev": { 00:16:32.517 "mask": "0x8", 00:16:32.517 "tpoint_mask": "0xffffffffffffffff" 00:16:32.517 }, 00:16:32.517 "nvmf_rdma": { 00:16:32.517 "mask": "0x10", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "nvmf_tcp": { 00:16:32.517 "mask": "0x20", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "ftl": { 00:16:32.517 "mask": "0x40", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "blobfs": { 00:16:32.517 "mask": "0x80", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "dsa": { 00:16:32.517 "mask": "0x200", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "thread": { 00:16:32.517 "mask": "0x400", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "nvme_pcie": { 00:16:32.517 "mask": "0x800", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "iaa": { 00:16:32.517 "mask": "0x1000", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "nvme_tcp": { 00:16:32.517 "mask": "0x2000", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "bdev_nvme": { 00:16:32.517 "mask": "0x4000", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "sock": { 00:16:32.517 "mask": "0x8000", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "blob": { 00:16:32.517 "mask": "0x10000", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "bdev_raid": { 00:16:32.517 "mask": "0x20000", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 }, 00:16:32.517 "scheduler": { 00:16:32.517 "mask": "0x40000", 00:16:32.517 "tpoint_mask": "0x0" 00:16:32.517 } 00:16:32.517 }' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:16:32.517 ************************************ 00:16:32.517 END TEST rpc_trace_cmd_test 00:16:32.517 ************************************ 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:16:32.517 00:16:32.517 real 0m0.256s 
00:16:32.517 user 0m0.202s 00:16:32.517 sys 0m0.044s 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:32.517 16:31:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 16:31:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:16:32.777 16:31:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:16:32.777 16:31:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:16:32.777 16:31:08 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:32.777 16:31:08 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:32.777 16:31:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 ************************************ 00:16:32.777 START TEST rpc_daemon_integrity 00:16:32.777 ************************************ 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:16:32.777 { 00:16:32.777 "name": "Malloc2", 00:16:32.777 "aliases": [ 00:16:32.777 "f0356901-39f4-4dd9-a77b-e27983138dec" 00:16:32.777 ], 00:16:32.777 "product_name": "Malloc disk", 00:16:32.777 "block_size": 512, 00:16:32.777 "num_blocks": 16384, 00:16:32.777 "uuid": "f0356901-39f4-4dd9-a77b-e27983138dec", 00:16:32.777 "assigned_rate_limits": { 00:16:32.777 "rw_ios_per_sec": 0, 00:16:32.777 "rw_mbytes_per_sec": 0, 00:16:32.777 "r_mbytes_per_sec": 0, 00:16:32.777 "w_mbytes_per_sec": 0 00:16:32.777 }, 00:16:32.777 "claimed": false, 00:16:32.777 "zoned": false, 00:16:32.777 "supported_io_types": { 00:16:32.777 "read": true, 00:16:32.777 "write": true, 00:16:32.777 "unmap": true, 00:16:32.777 "flush": true, 00:16:32.777 "reset": true, 00:16:32.777 "nvme_admin": false, 00:16:32.777 "nvme_io": false, 00:16:32.777 "nvme_io_md": false, 00:16:32.777 "write_zeroes": true, 00:16:32.777 "zcopy": true, 00:16:32.777 "get_zone_info": false, 00:16:32.777 "zone_management": false, 00:16:32.777 "zone_append": false, 00:16:32.777 "compare": false, 00:16:32.777 
"compare_and_write": false, 00:16:32.777 "abort": true, 00:16:32.777 "seek_hole": false, 00:16:32.777 "seek_data": false, 00:16:32.777 "copy": true, 00:16:32.777 "nvme_iov_md": false 00:16:32.777 }, 00:16:32.777 "memory_domains": [ 00:16:32.777 { 00:16:32.777 "dma_device_id": "system", 00:16:32.777 "dma_device_type": 1 00:16:32.777 }, 00:16:32.777 { 00:16:32.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.777 "dma_device_type": 2 00:16:32.777 } 00:16:32.777 ], 00:16:32.777 "driver_specific": {} 00:16:32.777 } 00:16:32.777 ]' 00:16:32.777 16:31:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:32.777 [2024-10-17 16:31:09.045913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:16:32.777 [2024-10-17 16:31:09.046147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.777 [2024-10-17 16:31:09.046186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:32.777 [2024-10-17 16:31:09.046203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.777 [2024-10-17 16:31:09.049088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.777 [2024-10-17 16:31:09.049254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:16:32.777 Passthru0 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.777 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:16:33.037 { 00:16:33.037 "name": "Malloc2", 00:16:33.037 "aliases": [ 00:16:33.037 "f0356901-39f4-4dd9-a77b-e27983138dec" 00:16:33.037 ], 00:16:33.037 "product_name": "Malloc disk", 00:16:33.037 "block_size": 512, 00:16:33.037 "num_blocks": 16384, 00:16:33.037 "uuid": "f0356901-39f4-4dd9-a77b-e27983138dec", 00:16:33.037 "assigned_rate_limits": { 00:16:33.037 "rw_ios_per_sec": 0, 00:16:33.037 "rw_mbytes_per_sec": 0, 00:16:33.037 "r_mbytes_per_sec": 0, 00:16:33.037 "w_mbytes_per_sec": 0 00:16:33.037 }, 00:16:33.037 "claimed": true, 00:16:33.037 "claim_type": "exclusive_write", 00:16:33.037 "zoned": false, 00:16:33.037 "supported_io_types": { 00:16:33.037 "read": true, 00:16:33.037 "write": true, 00:16:33.037 "unmap": true, 00:16:33.037 "flush": true, 00:16:33.037 "reset": true, 00:16:33.037 "nvme_admin": false, 00:16:33.037 "nvme_io": false, 00:16:33.037 "nvme_io_md": false, 00:16:33.037 "write_zeroes": true, 00:16:33.037 "zcopy": true, 00:16:33.037 "get_zone_info": false, 00:16:33.037 "zone_management": false, 00:16:33.037 "zone_append": false, 00:16:33.037 "compare": false, 00:16:33.037 "compare_and_write": false, 00:16:33.037 "abort": true, 00:16:33.037 "seek_hole": false, 00:16:33.037 "seek_data": false, 
00:16:33.037 "copy": true, 00:16:33.037 "nvme_iov_md": false 00:16:33.037 }, 00:16:33.037 "memory_domains": [ 00:16:33.037 { 00:16:33.037 "dma_device_id": "system", 00:16:33.037 "dma_device_type": 1 00:16:33.037 }, 00:16:33.037 { 00:16:33.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.037 "dma_device_type": 2 00:16:33.037 } 00:16:33.037 ], 00:16:33.037 "driver_specific": {} 00:16:33.037 }, 00:16:33.037 { 00:16:33.037 "name": "Passthru0", 00:16:33.037 "aliases": [ 00:16:33.037 "ab960996-6c84-5cc2-aa24-1ad42843d8ef" 00:16:33.037 ], 00:16:33.037 "product_name": "passthru", 00:16:33.037 "block_size": 512, 00:16:33.037 "num_blocks": 16384, 00:16:33.037 "uuid": "ab960996-6c84-5cc2-aa24-1ad42843d8ef", 00:16:33.037 "assigned_rate_limits": { 00:16:33.037 "rw_ios_per_sec": 0, 00:16:33.037 "rw_mbytes_per_sec": 0, 00:16:33.037 "r_mbytes_per_sec": 0, 00:16:33.037 "w_mbytes_per_sec": 0 00:16:33.037 }, 00:16:33.037 "claimed": false, 00:16:33.037 "zoned": false, 00:16:33.037 "supported_io_types": { 00:16:33.037 "read": true, 00:16:33.037 "write": true, 00:16:33.037 "unmap": true, 00:16:33.037 "flush": true, 00:16:33.037 "reset": true, 00:16:33.037 "nvme_admin": false, 00:16:33.037 "nvme_io": false, 00:16:33.037 "nvme_io_md": false, 00:16:33.037 "write_zeroes": true, 00:16:33.037 "zcopy": true, 00:16:33.037 "get_zone_info": false, 00:16:33.037 "zone_management": false, 00:16:33.037 "zone_append": false, 00:16:33.037 "compare": false, 00:16:33.037 "compare_and_write": false, 00:16:33.037 "abort": true, 00:16:33.037 "seek_hole": false, 00:16:33.037 "seek_data": false, 00:16:33.037 "copy": true, 00:16:33.037 "nvme_iov_md": false 00:16:33.037 }, 00:16:33.037 "memory_domains": [ 00:16:33.037 { 00:16:33.037 "dma_device_id": "system", 00:16:33.037 "dma_device_type": 1 00:16:33.037 }, 00:16:33.037 { 00:16:33.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.037 "dma_device_type": 2 00:16:33.037 } 00:16:33.037 ], 00:16:33.037 "driver_specific": { 00:16:33.037 "passthru": { 00:16:33.037 "name": "Passthru0", 00:16:33.037 "base_bdev_name": "Malloc2" 00:16:33.037 } 00:16:33.037 } 00:16:33.037 } 00:16:33.037 ]' 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:16:33.037 16:31:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:16:33.037 00:16:33.037 real 0m0.388s 00:16:33.038 user 0m0.209s 00:16:33.038 sys 0m0.069s 00:16:33.038 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.038 16:31:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:16:33.038 ************************************ 00:16:33.038 END TEST rpc_daemon_integrity 00:16:33.038 ************************************ 00:16:33.038 16:31:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:33.038 16:31:09 rpc -- rpc/rpc.sh@84 -- # killprocess 57752 00:16:33.038 16:31:09 rpc -- common/autotest_common.sh@950 -- # '[' -z 57752 ']' 00:16:33.038 16:31:09 rpc -- common/autotest_common.sh@954 -- # kill -0 57752 00:16:33.038 16:31:09 rpc -- common/autotest_common.sh@955 -- # uname 00:16:33.038 16:31:09 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:33.296 16:31:09 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57752 00:16:33.296 killing process with pid 57752 00:16:33.296 16:31:09 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:33.296 16:31:09 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:33.296 16:31:09 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57752' 00:16:33.296 16:31:09 rpc -- common/autotest_common.sh@969 -- # kill 57752 00:16:33.296 16:31:09 rpc -- common/autotest_common.sh@974 -- # wait 57752 00:16:35.833 ************************************ 00:16:35.833 END TEST rpc 00:16:35.833 ************************************ 00:16:35.833 00:16:35.833 real 0m5.701s 00:16:35.833 user 0m6.274s 00:16:35.833 sys 0m1.042s 00:16:35.833 16:31:11 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.833 16:31:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:16:35.833 16:31:12 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:16:35.833 16:31:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:35.833 16:31:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.833 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:16:35.833 ************************************ 00:16:35.833 START TEST skip_rpc 00:16:35.833 ************************************ 00:16:35.833 16:31:12 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:16:36.093 * Looking for test storage... 
00:16:36.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@345 -- # : 1 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:36.093 16:31:12 skip_rpc -- scripts/common.sh@368 -- # return 0 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:36.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.093 --rc genhtml_branch_coverage=1 00:16:36.093 --rc genhtml_function_coverage=1 00:16:36.093 --rc genhtml_legend=1 00:16:36.093 --rc geninfo_all_blocks=1 00:16:36.093 --rc geninfo_unexecuted_blocks=1 00:16:36.093 00:16:36.093 ' 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:36.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.093 --rc genhtml_branch_coverage=1 00:16:36.093 --rc genhtml_function_coverage=1 00:16:36.093 --rc genhtml_legend=1 00:16:36.093 --rc geninfo_all_blocks=1 00:16:36.093 --rc geninfo_unexecuted_blocks=1 00:16:36.093 00:16:36.093 ' 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:16:36.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.093 --rc genhtml_branch_coverage=1 00:16:36.093 --rc genhtml_function_coverage=1 00:16:36.093 --rc genhtml_legend=1 00:16:36.093 --rc geninfo_all_blocks=1 00:16:36.093 --rc geninfo_unexecuted_blocks=1 00:16:36.093 00:16:36.093 ' 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:36.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:36.093 --rc genhtml_branch_coverage=1 00:16:36.093 --rc genhtml_function_coverage=1 00:16:36.093 --rc genhtml_legend=1 00:16:36.093 --rc geninfo_all_blocks=1 00:16:36.093 --rc geninfo_unexecuted_blocks=1 00:16:36.093 00:16:36.093 ' 00:16:36.093 16:31:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:36.093 16:31:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:36.093 16:31:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:36.093 16:31:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.093 ************************************ 00:16:36.093 START TEST skip_rpc 00:16:36.093 ************************************ 00:16:36.093 16:31:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:16:36.093 16:31:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57981 00:16:36.093 16:31:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:16:36.093 16:31:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:36.093 16:31:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:16:36.352 [2024-10-17 16:31:12.465057] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:16:36.352 [2024-10-17 16:31:12.465192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57981 ] 00:16:36.352 [2024-10-17 16:31:12.641299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.611 [2024-10-17 16:31:12.775585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.901 16:31:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:16:41.901 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:16:41.901 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:16:41.901 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:41.901 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.901 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57981 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57981 ']' 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57981 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57981 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:41.902 killing process with pid 57981 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57981' 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57981 00:16:41.902 16:31:17 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57981 00:16:43.805 ************************************ 00:16:43.805 END TEST skip_rpc 00:16:43.805 ************************************ 00:16:43.805 00:16:43.805 real 0m7.561s 00:16:43.805 user 0m7.034s 00:16:43.805 sys 0m0.442s 00:16:43.805 16:31:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:43.805 16:31:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:16:43.805 16:31:19 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:16:43.805 16:31:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:43.805 16:31:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:43.805 16:31:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.805 ************************************ 00:16:43.805 START TEST skip_rpc_with_json 00:16:43.805 ************************************ 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58096 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:43.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58096 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 58096 ']' 00:16:43.805 16:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.806 16:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:43.806 16:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.806 16:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:43.806 16:31:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:43.806 [2024-10-17 16:31:20.094257] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:16:43.806 [2024-10-17 16:31:20.094409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58096 ] 00:16:44.064 [2024-10-17 16:31:20.267141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.323 [2024-10-17 16:31:20.385477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:45.264 [2024-10-17 16:31:21.254343] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:16:45.264 request: 00:16:45.264 { 00:16:45.264 "trtype": "tcp", 00:16:45.264 "method": "nvmf_get_transports", 00:16:45.264 "req_id": 1 00:16:45.264 } 00:16:45.264 Got JSON-RPC error response 00:16:45.264 response: 00:16:45.264 { 00:16:45.264 "code": -19, 00:16:45.264 "message": "No such device" 00:16:45.264 } 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:45.264 [2024-10-17 16:31:21.270404] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.264 16:31:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:45.264 { 00:16:45.264 "subsystems": [ 00:16:45.264 { 00:16:45.264 "subsystem": "fsdev", 00:16:45.264 "config": [ 00:16:45.264 { 00:16:45.264 "method": "fsdev_set_opts", 00:16:45.264 "params": { 00:16:45.264 "fsdev_io_pool_size": 65535, 00:16:45.264 "fsdev_io_cache_size": 256 00:16:45.264 } 00:16:45.264 } 00:16:45.264 ] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "keyring", 00:16:45.264 "config": [] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "iobuf", 00:16:45.264 "config": [ 00:16:45.264 { 00:16:45.264 "method": "iobuf_set_options", 00:16:45.264 "params": { 00:16:45.264 "small_pool_count": 8192, 00:16:45.264 "large_pool_count": 1024, 00:16:45.264 "small_bufsize": 8192, 00:16:45.264 "large_bufsize": 135168 00:16:45.264 } 00:16:45.264 } 00:16:45.264 ] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "sock", 00:16:45.264 "config": [ 00:16:45.264 { 00:16:45.264 "method": 
"sock_set_default_impl", 00:16:45.264 "params": { 00:16:45.264 "impl_name": "posix" 00:16:45.264 } 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "method": "sock_impl_set_options", 00:16:45.264 "params": { 00:16:45.264 "impl_name": "ssl", 00:16:45.264 "recv_buf_size": 4096, 00:16:45.264 "send_buf_size": 4096, 00:16:45.264 "enable_recv_pipe": true, 00:16:45.264 "enable_quickack": false, 00:16:45.264 "enable_placement_id": 0, 00:16:45.264 "enable_zerocopy_send_server": true, 00:16:45.264 "enable_zerocopy_send_client": false, 00:16:45.264 "zerocopy_threshold": 0, 00:16:45.264 "tls_version": 0, 00:16:45.264 "enable_ktls": false 00:16:45.264 } 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "method": "sock_impl_set_options", 00:16:45.264 "params": { 00:16:45.264 "impl_name": "posix", 00:16:45.264 "recv_buf_size": 2097152, 00:16:45.264 "send_buf_size": 2097152, 00:16:45.264 "enable_recv_pipe": true, 00:16:45.264 "enable_quickack": false, 00:16:45.264 "enable_placement_id": 0, 00:16:45.264 "enable_zerocopy_send_server": true, 00:16:45.264 "enable_zerocopy_send_client": false, 00:16:45.264 "zerocopy_threshold": 0, 00:16:45.264 "tls_version": 0, 00:16:45.264 "enable_ktls": false 00:16:45.264 } 00:16:45.264 } 00:16:45.264 ] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "vmd", 00:16:45.264 "config": [] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "accel", 00:16:45.264 "config": [ 00:16:45.264 { 00:16:45.264 "method": "accel_set_options", 00:16:45.264 "params": { 00:16:45.264 "small_cache_size": 128, 00:16:45.264 "large_cache_size": 16, 00:16:45.264 "task_count": 2048, 00:16:45.264 "sequence_count": 2048, 00:16:45.264 "buf_count": 2048 00:16:45.264 } 00:16:45.264 } 00:16:45.264 ] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "bdev", 00:16:45.264 "config": [ 00:16:45.264 { 00:16:45.264 "method": "bdev_set_options", 00:16:45.264 "params": { 00:16:45.264 "bdev_io_pool_size": 65535, 00:16:45.264 "bdev_io_cache_size": 256, 00:16:45.264 "bdev_auto_examine": true, 00:16:45.264 "iobuf_small_cache_size": 128, 00:16:45.264 "iobuf_large_cache_size": 16 00:16:45.264 } 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "method": "bdev_raid_set_options", 00:16:45.264 "params": { 00:16:45.264 "process_window_size_kb": 1024, 00:16:45.264 "process_max_bandwidth_mb_sec": 0 00:16:45.264 } 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "method": "bdev_iscsi_set_options", 00:16:45.264 "params": { 00:16:45.264 "timeout_sec": 30 00:16:45.264 } 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "method": "bdev_nvme_set_options", 00:16:45.264 "params": { 00:16:45.264 "action_on_timeout": "none", 00:16:45.264 "timeout_us": 0, 00:16:45.264 "timeout_admin_us": 0, 00:16:45.264 "keep_alive_timeout_ms": 10000, 00:16:45.264 "arbitration_burst": 0, 00:16:45.264 "low_priority_weight": 0, 00:16:45.264 "medium_priority_weight": 0, 00:16:45.264 "high_priority_weight": 0, 00:16:45.264 "nvme_adminq_poll_period_us": 10000, 00:16:45.264 "nvme_ioq_poll_period_us": 0, 00:16:45.264 "io_queue_requests": 0, 00:16:45.264 "delay_cmd_submit": true, 00:16:45.264 "transport_retry_count": 4, 00:16:45.264 "bdev_retry_count": 3, 00:16:45.264 "transport_ack_timeout": 0, 00:16:45.264 "ctrlr_loss_timeout_sec": 0, 00:16:45.264 "reconnect_delay_sec": 0, 00:16:45.264 "fast_io_fail_timeout_sec": 0, 00:16:45.264 "disable_auto_failback": false, 00:16:45.264 "generate_uuids": false, 00:16:45.264 "transport_tos": 0, 00:16:45.264 "nvme_error_stat": false, 00:16:45.264 "rdma_srq_size": 0, 00:16:45.264 "io_path_stat": false, 00:16:45.264 
"allow_accel_sequence": false, 00:16:45.264 "rdma_max_cq_size": 0, 00:16:45.264 "rdma_cm_event_timeout_ms": 0, 00:16:45.264 "dhchap_digests": [ 00:16:45.264 "sha256", 00:16:45.264 "sha384", 00:16:45.264 "sha512" 00:16:45.264 ], 00:16:45.264 "dhchap_dhgroups": [ 00:16:45.264 "null", 00:16:45.264 "ffdhe2048", 00:16:45.264 "ffdhe3072", 00:16:45.264 "ffdhe4096", 00:16:45.264 "ffdhe6144", 00:16:45.264 "ffdhe8192" 00:16:45.264 ] 00:16:45.264 } 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "method": "bdev_nvme_set_hotplug", 00:16:45.264 "params": { 00:16:45.264 "period_us": 100000, 00:16:45.264 "enable": false 00:16:45.264 } 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "method": "bdev_wait_for_examine" 00:16:45.264 } 00:16:45.264 ] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "scsi", 00:16:45.264 "config": null 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "scheduler", 00:16:45.264 "config": [ 00:16:45.264 { 00:16:45.264 "method": "framework_set_scheduler", 00:16:45.264 "params": { 00:16:45.264 "name": "static" 00:16:45.264 } 00:16:45.264 } 00:16:45.264 ] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "vhost_scsi", 00:16:45.264 "config": [] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "vhost_blk", 00:16:45.264 "config": [] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "ublk", 00:16:45.264 "config": [] 00:16:45.264 }, 00:16:45.264 { 00:16:45.264 "subsystem": "nbd", 00:16:45.264 "config": [] 00:16:45.264 }, 00:16:45.265 { 00:16:45.265 "subsystem": "nvmf", 00:16:45.265 "config": [ 00:16:45.265 { 00:16:45.265 "method": "nvmf_set_config", 00:16:45.265 "params": { 00:16:45.265 "discovery_filter": "match_any", 00:16:45.265 "admin_cmd_passthru": { 00:16:45.265 "identify_ctrlr": false 00:16:45.265 }, 00:16:45.265 "dhchap_digests": [ 00:16:45.265 "sha256", 00:16:45.265 "sha384", 00:16:45.265 "sha512" 00:16:45.265 ], 00:16:45.265 "dhchap_dhgroups": [ 00:16:45.265 "null", 00:16:45.265 "ffdhe2048", 00:16:45.265 "ffdhe3072", 00:16:45.265 "ffdhe4096", 00:16:45.265 "ffdhe6144", 00:16:45.265 "ffdhe8192" 00:16:45.265 ] 00:16:45.265 } 00:16:45.265 }, 00:16:45.265 { 00:16:45.265 "method": "nvmf_set_max_subsystems", 00:16:45.265 "params": { 00:16:45.265 "max_subsystems": 1024 00:16:45.265 } 00:16:45.265 }, 00:16:45.265 { 00:16:45.265 "method": "nvmf_set_crdt", 00:16:45.265 "params": { 00:16:45.265 "crdt1": 0, 00:16:45.265 "crdt2": 0, 00:16:45.265 "crdt3": 0 00:16:45.265 } 00:16:45.265 }, 00:16:45.265 { 00:16:45.265 "method": "nvmf_create_transport", 00:16:45.265 "params": { 00:16:45.265 "trtype": "TCP", 00:16:45.265 "max_queue_depth": 128, 00:16:45.265 "max_io_qpairs_per_ctrlr": 127, 00:16:45.265 "in_capsule_data_size": 4096, 00:16:45.265 "max_io_size": 131072, 00:16:45.265 "io_unit_size": 131072, 00:16:45.265 "max_aq_depth": 128, 00:16:45.265 "num_shared_buffers": 511, 00:16:45.265 "buf_cache_size": 4294967295, 00:16:45.265 "dif_insert_or_strip": false, 00:16:45.265 "zcopy": false, 00:16:45.265 "c2h_success": true, 00:16:45.265 "sock_priority": 0, 00:16:45.265 "abort_timeout_sec": 1, 00:16:45.265 "ack_timeout": 0, 00:16:45.265 "data_wr_pool_size": 0 00:16:45.265 } 00:16:45.265 } 00:16:45.265 ] 00:16:45.265 }, 00:16:45.265 { 00:16:45.265 "subsystem": "iscsi", 00:16:45.265 "config": [ 00:16:45.265 { 00:16:45.265 "method": "iscsi_set_options", 00:16:45.265 "params": { 00:16:45.265 "node_base": "iqn.2016-06.io.spdk", 00:16:45.265 "max_sessions": 128, 00:16:45.265 "max_connections_per_session": 2, 00:16:45.265 "max_queue_depth": 64, 00:16:45.265 "default_time2wait": 2, 
00:16:45.265 "default_time2retain": 20, 00:16:45.265 "first_burst_length": 8192, 00:16:45.265 "immediate_data": true, 00:16:45.265 "allow_duplicated_isid": false, 00:16:45.265 "error_recovery_level": 0, 00:16:45.265 "nop_timeout": 60, 00:16:45.265 "nop_in_interval": 30, 00:16:45.265 "disable_chap": false, 00:16:45.265 "require_chap": false, 00:16:45.265 "mutual_chap": false, 00:16:45.265 "chap_group": 0, 00:16:45.265 "max_large_datain_per_connection": 64, 00:16:45.265 "max_r2t_per_connection": 4, 00:16:45.265 "pdu_pool_size": 36864, 00:16:45.265 "immediate_data_pool_size": 16384, 00:16:45.265 "data_out_pool_size": 2048 00:16:45.265 } 00:16:45.265 } 00:16:45.265 ] 00:16:45.265 } 00:16:45.265 ] 00:16:45.265 } 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58096 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58096 ']' 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58096 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58096 00:16:45.265 killing process with pid 58096 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58096' 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58096 00:16:45.265 16:31:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58096 00:16:47.801 16:31:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:16:47.801 16:31:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58142 00:16:47.801 16:31:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58142 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 58142 ']' 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 58142 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58142 00:16:53.072 killing process with pid 58142 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58142' 00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 58142 
00:16:53.072 16:31:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 58142 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:16:55.605 00:16:55.605 real 0m11.348s 00:16:55.605 user 0m10.832s 00:16:55.605 sys 0m0.898s 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.605 ************************************ 00:16:55.605 END TEST skip_rpc_with_json 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:16:55.605 ************************************ 00:16:55.605 16:31:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:16:55.605 16:31:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:55.605 16:31:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.605 16:31:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.605 ************************************ 00:16:55.605 START TEST skip_rpc_with_delay 00:16:55.605 ************************************ 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:16:55.605 [2024-10-17 16:31:31.520270] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
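(skip_rpc_with_delay only asserts that this flag combination is rejected: --wait-for-rpc pauses initialization until an RPC releases it, which cannot work when --no-rpc-server disables the RPC server entirely. The failure is reproducible directly; the expected result is a non-zero exit plus the error printed above:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?   # expect non-zero

In the normal flow, a target started with --wait-for-rpc alone is released later with scripts/rpc.py framework_start_init.)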
00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:55.605 00:16:55.605 real 0m0.180s 00:16:55.605 user 0m0.082s 00:16:55.605 sys 0m0.095s 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.605 ************************************ 00:16:55.605 END TEST skip_rpc_with_delay 00:16:55.605 ************************************ 00:16:55.605 16:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:16:55.605 16:31:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:16:55.605 16:31:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:16:55.605 16:31:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:16:55.605 16:31:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:55.605 16:31:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.605 16:31:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:55.605 ************************************ 00:16:55.605 START TEST exit_on_failed_rpc_init 00:16:55.605 ************************************ 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58280 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58280 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 58280 ']' 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.605 16:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:16:55.605 [2024-10-17 16:31:31.768545] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:16:55.605 [2024-10-17 16:31:31.768678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58280 ] 00:16:55.864 [2024-10-17 16:31:31.941750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.864 [2024-10-17 16:31:32.066536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:16:56.796 16:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:16:56.796 [2024-10-17 16:31:33.038661] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:16:56.796 [2024-10-17 16:31:33.038833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:16:57.054 [2024-10-17 16:31:33.203599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.311 [2024-10-17 16:31:33.362443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.311 [2024-10-17 16:31:33.363313] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
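(exit_on_failed_rpc_init deliberately starts a second spdk_tgt while the first still owns the default RPC socket, then asserts in the lines that follow that the rpc init failure makes the app exit non-zero, the es=234 check below. Running two targets side by side normally means giving each instance its own RPC socket; a sketch, where spdk2.sock is an arbitrary name:

    build/bin/spdk_tgt -m 0x1 &                         # first instance on /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &  # second instance on its own socket
    scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version

The -r flag sets the RPC listen address, and rpc.py's -s flag points the client at the matching socket.)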
00:16:57.311 [2024-10-17 16:31:33.363869] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:57.311 [2024-10-17 16:31:33.364283] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58280 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 58280 ']' 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 58280 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58280 00:16:57.569 killing process with pid 58280 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58280' 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 58280 00:16:57.569 16:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 58280 00:17:00.100 00:17:00.100 real 0m4.426s 00:17:00.100 user 0m4.743s 00:17:00.100 sys 0m0.645s 00:17:00.100 16:31:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.100 16:31:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:17:00.100 ************************************ 00:17:00.100 END TEST exit_on_failed_rpc_init 00:17:00.100 ************************************ 00:17:00.100 16:31:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:00.100 ************************************ 00:17:00.100 END TEST skip_rpc 00:17:00.100 ************************************ 00:17:00.100 00:17:00.100 real 0m24.076s 00:17:00.100 user 0m22.943s 00:17:00.100 sys 0m2.381s 00:17:00.100 16:31:36 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.100 16:31:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.100 16:31:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:00.100 16:31:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:00.100 16:31:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.100 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:00.100 
************************************ 00:17:00.100 START TEST rpc_client 00:17:00.100 ************************************ 00:17:00.100 16:31:36 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:00.100 * Looking for test storage... 00:17:00.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:17:00.100 16:31:36 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:00.100 16:31:36 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:17:00.100 16:31:36 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@345 -- # : 1 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@353 -- # local d=1 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@355 -- # echo 1 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@353 -- # local d=2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@355 -- # echo 2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.359 16:31:36 rpc_client -- scripts/common.sh@368 -- # return 0 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:00.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.359 --rc genhtml_branch_coverage=1 00:17:00.359 --rc genhtml_function_coverage=1 00:17:00.359 --rc genhtml_legend=1 00:17:00.359 --rc geninfo_all_blocks=1 00:17:00.359 --rc geninfo_unexecuted_blocks=1 00:17:00.359 00:17:00.359 ' 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:00.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.359 --rc genhtml_branch_coverage=1 00:17:00.359 --rc genhtml_function_coverage=1 00:17:00.359 --rc genhtml_legend=1 00:17:00.359 --rc geninfo_all_blocks=1 00:17:00.359 --rc geninfo_unexecuted_blocks=1 00:17:00.359 00:17:00.359 ' 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:00.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.359 --rc genhtml_branch_coverage=1 00:17:00.359 --rc genhtml_function_coverage=1 00:17:00.359 --rc genhtml_legend=1 00:17:00.359 --rc geninfo_all_blocks=1 00:17:00.359 --rc geninfo_unexecuted_blocks=1 00:17:00.359 00:17:00.359 ' 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:00.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.359 --rc genhtml_branch_coverage=1 00:17:00.359 --rc genhtml_function_coverage=1 00:17:00.359 --rc genhtml_legend=1 00:17:00.359 --rc geninfo_all_blocks=1 00:17:00.359 --rc geninfo_unexecuted_blocks=1 00:17:00.359 00:17:00.359 ' 00:17:00.359 16:31:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:17:00.359 OK 00:17:00.359 16:31:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:17:00.359 00:17:00.359 real 0m0.304s 00:17:00.359 user 0m0.158s 00:17:00.359 sys 0m0.163s 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.359 16:31:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:17:00.359 ************************************ 00:17:00.359 END TEST rpc_client 00:17:00.359 ************************************ 00:17:00.359 16:31:36 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:00.359 16:31:36 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:00.359 16:31:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.359 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:00.359 ************************************ 00:17:00.359 START TEST json_config 00:17:00.359 ************************************ 00:17:00.359 16:31:36 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:00.617 16:31:36 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:00.617 16:31:36 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:17:00.617 16:31:36 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:00.617 16:31:36 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.617 16:31:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.617 16:31:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.617 16:31:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.617 16:31:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.617 16:31:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.617 16:31:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.617 16:31:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.617 16:31:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:17:00.617 16:31:36 json_config -- scripts/common.sh@345 -- # : 1 00:17:00.617 16:31:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.617 16:31:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.617 16:31:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:17:00.617 16:31:36 json_config -- scripts/common.sh@353 -- # local d=1 00:17:00.617 16:31:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.617 16:31:36 json_config -- scripts/common.sh@355 -- # echo 1 00:17:00.617 16:31:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.617 16:31:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@353 -- # local d=2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.617 16:31:36 json_config -- scripts/common.sh@355 -- # echo 2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.617 16:31:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.617 16:31:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.617 16:31:36 json_config -- scripts/common.sh@368 -- # return 0 00:17:00.617 16:31:36 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.618 16:31:36 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:00.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.618 --rc genhtml_branch_coverage=1 00:17:00.618 --rc genhtml_function_coverage=1 00:17:00.618 --rc genhtml_legend=1 00:17:00.618 --rc geninfo_all_blocks=1 00:17:00.618 --rc geninfo_unexecuted_blocks=1 00:17:00.618 00:17:00.618 ' 00:17:00.618 16:31:36 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:00.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.618 --rc genhtml_branch_coverage=1 00:17:00.618 --rc genhtml_function_coverage=1 00:17:00.618 --rc genhtml_legend=1 00:17:00.618 --rc geninfo_all_blocks=1 00:17:00.618 --rc geninfo_unexecuted_blocks=1 00:17:00.618 00:17:00.618 ' 00:17:00.618 16:31:36 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:00.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.618 --rc genhtml_branch_coverage=1 00:17:00.618 --rc genhtml_function_coverage=1 00:17:00.618 --rc genhtml_legend=1 00:17:00.618 --rc geninfo_all_blocks=1 00:17:00.618 --rc geninfo_unexecuted_blocks=1 00:17:00.618 00:17:00.618 ' 00:17:00.618 16:31:36 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:00.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.618 --rc genhtml_branch_coverage=1 00:17:00.618 --rc genhtml_function_coverage=1 00:17:00.618 --rc genhtml_legend=1 00:17:00.618 --rc geninfo_all_blocks=1 00:17:00.618 --rc geninfo_unexecuted_blocks=1 00:17:00.618 00:17:00.618 ' 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.618 16:31:36 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba9c5064-9e78-405f-b6ca-bee3ef04967c 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ba9c5064-9e78-405f-b6ca-bee3ef04967c 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.618 16:31:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.618 16:31:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.618 16:31:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.618 16:31:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.618 16:31:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.618 16:31:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.618 16:31:36 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.618 16:31:36 json_config -- paths/export.sh@5 -- # export PATH 00:17:00.618 16:31:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@51 -- # : 0 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.618 16:31:36 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.618 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.618 16:31:36 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.618 WARNING: No tests are enabled so not running JSON configuration tests 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:17:00.618 16:31:36 json_config -- json_config/json_config.sh@28 -- # exit 0 00:17:00.618 ************************************ 00:17:00.618 END TEST json_config 00:17:00.618 ************************************ 00:17:00.618 00:17:00.618 real 0m0.238s 00:17:00.618 user 0m0.135s 00:17:00.618 sys 0m0.102s 00:17:00.618 16:31:36 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.618 16:31:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:17:00.618 16:31:36 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:00.618 16:31:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:00.618 16:31:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.618 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:17:00.618 ************************************ 00:17:00.618 START TEST json_config_extra_key 00:17:00.618 ************************************ 00:17:00.618 16:31:36 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:00.878 16:31:36 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:00.878 16:31:36 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:17:00.878 16:31:36 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:00.878 16:31:37 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.878 16:31:37 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:17:00.878 16:31:37 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.878 16:31:37 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 --rc genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 16:31:37 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 --rc genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 16:31:37 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 --rc genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 16:31:37 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 --rc 
genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ba9c5064-9e78-405f-b6ca-bee3ef04967c 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ba9c5064-9e78-405f-b6ca-bee3ef04967c 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.878 16:31:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.878 16:31:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.878 16:31:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.878 16:31:37 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.878 16:31:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:17:00.878 16:31:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.878 16:31:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.878 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:00.878 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:17:00.879 INFO: launching applications... 
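Twice in this section nvmf/common.sh line 33 logs "[: : integer expression expected": the trace shows '[' '' -eq 1 ']', that is, an unset variable reaching a numeric test as the empty string. The run tolerates it (the comparison simply fails), but a defensive default would silence the noise. A hypothetical guard, with SOME_FLAG standing in for whatever variable is empty at line 33 (its real name is not visible in this log):

  # SOME_FLAG is a placeholder; empty/unset collapses to 0, a valid integer.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi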
00:17:00.879 16:31:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58508 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:00.879 Waiting for target to run... 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58508 /var/tmp/spdk_tgt.sock 00:17:00.879 16:31:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:00.879 16:31:37 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58508 ']' 00:17:00.879 16:31:37 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:00.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:00.879 16:31:37 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:00.879 16:31:37 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:00.879 16:31:37 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:00.879 16:31:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:01.137 [2024-10-17 16:31:37.250002] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:17:01.137 [2024-10-17 16:31:37.250363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58508 ] 00:17:01.396 [2024-10-17 16:31:37.639921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.655 [2024-10-17 16:31:37.754532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.221 00:17:02.221 INFO: shutting down applications... 00:17:02.221 16:31:38 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.221 16:31:38 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:17:02.221 16:31:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
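The records above launch spdk_tgt with --json and poll until pid 58508 answers on /var/tmp/spdk_tgt.sock; the records below send SIGINT and re-check with `kill -0` every 0.5 s, up to 30 tries, until the target exits. A compact sketch of that start/stop lifecycle, assuming rpc.py's -s socket flag as the liveness probe and a repo-root working directory (the 0.1 s startup polling interval is an assumption, only max_retries=100 is visible in the trace):

  start_and_wait() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
      for (( i = 0; i < 100; i++ )); do             # max_retries=100, as in the trace
          kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
          scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1                                 # polling interval assumed
      done
      return 1
  }

  stop_and_wait() {
      local pid=$1 i
      kill -SIGINT "$pid"
      for (( i = 0; i < 30; i++ )); do              # 30 x 0.5 s, as in the trace
          kill -0 "$pid" 2>/dev/null || return 0    # gone: clean shutdown
          sleep 0.5
      done
      return 1                                      # still alive after ~15 s
  }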
00:17:02.221 16:31:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58508 ]] 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58508 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58508 00:17:02.221 16:31:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:02.790 16:31:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:02.790 16:31:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:02.790 16:31:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58508 00:17:02.790 16:31:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:03.355 16:31:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:03.355 16:31:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:03.355 16:31:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58508 00:17:03.355 16:31:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:03.923 16:31:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:03.923 16:31:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:03.923 16:31:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58508 00:17:03.923 16:31:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:04.490 16:31:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:04.490 16:31:40 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:04.490 16:31:40 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58508 00:17:04.490 16:31:40 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:04.749 16:31:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:04.749 16:31:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:04.749 16:31:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58508 00:17:04.749 16:31:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:17:05.317 16:31:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:17:05.317 16:31:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:05.317 16:31:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58508 00:17:05.317 16:31:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:05.317 16:31:41 json_config_extra_key -- json_config/common.sh@43 -- # break 00:17:05.317 SPDK target shutdown done 00:17:05.317 16:31:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:05.317 16:31:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:05.317 Success 00:17:05.317 16:31:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:05.317 ************************************ 00:17:05.317 END TEST json_config_extra_key 00:17:05.317 
************************************ 00:17:05.317 00:17:05.317 real 0m4.623s 00:17:05.317 user 0m4.041s 00:17:05.317 sys 0m0.617s 00:17:05.317 16:31:41 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:05.317 16:31:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:17:05.317 16:31:41 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:05.317 16:31:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:05.317 16:31:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:05.317 16:31:41 -- common/autotest_common.sh@10 -- # set +x 00:17:05.317 ************************************ 00:17:05.317 START TEST alias_rpc 00:17:05.317 ************************************ 00:17:05.317 16:31:41 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:05.576 * Looking for test storage... 00:17:05.576 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:17:05.576 16:31:41 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:05.576 16:31:41 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:17:05.576 16:31:41 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:05.576 16:31:41 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@345 -- # : 1 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.577 16:31:41 alias_rpc -- scripts/common.sh@368 -- # return 0 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:05.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.577 --rc genhtml_branch_coverage=1 00:17:05.577 --rc genhtml_function_coverage=1 00:17:05.577 --rc genhtml_legend=1 00:17:05.577 --rc geninfo_all_blocks=1 00:17:05.577 --rc geninfo_unexecuted_blocks=1 00:17:05.577 00:17:05.577 ' 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:05.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.577 --rc genhtml_branch_coverage=1 00:17:05.577 --rc genhtml_function_coverage=1 00:17:05.577 --rc genhtml_legend=1 00:17:05.577 --rc geninfo_all_blocks=1 00:17:05.577 --rc geninfo_unexecuted_blocks=1 00:17:05.577 00:17:05.577 ' 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:05.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.577 --rc genhtml_branch_coverage=1 00:17:05.577 --rc genhtml_function_coverage=1 00:17:05.577 --rc genhtml_legend=1 00:17:05.577 --rc geninfo_all_blocks=1 00:17:05.577 --rc geninfo_unexecuted_blocks=1 00:17:05.577 00:17:05.577 ' 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:05.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.577 --rc genhtml_branch_coverage=1 00:17:05.577 --rc genhtml_function_coverage=1 00:17:05.577 --rc genhtml_legend=1 00:17:05.577 --rc geninfo_all_blocks=1 00:17:05.577 --rc geninfo_unexecuted_blocks=1 00:17:05.577 00:17:05.577 ' 00:17:05.577 16:31:41 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:05.577 16:31:41 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58624 00:17:05.577 16:31:41 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:05.577 16:31:41 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58624 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58624 ']' 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:17:05.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:05.577 16:31:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:05.836 [2024-10-17 16:31:41.931469] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:17:05.836 [2024-10-17 16:31:41.931779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58624 ] 00:17:05.836 [2024-10-17 16:31:42.102248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.096 [2024-10-17 16:31:42.222642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.032 16:31:43 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.032 16:31:43 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:07.032 16:31:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:17:07.032 16:31:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58624 00:17:07.032 16:31:43 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58624 ']' 00:17:07.032 16:31:43 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58624 00:17:07.032 16:31:43 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:17:07.032 16:31:43 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:07.032 16:31:43 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58624 00:17:07.291 killing process with pid 58624 00:17:07.291 16:31:43 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:07.291 16:31:43 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:07.291 16:31:43 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58624' 00:17:07.291 16:31:43 alias_rpc -- common/autotest_common.sh@969 -- # kill 58624 00:17:07.291 16:31:43 alias_rpc -- common/autotest_common.sh@974 -- # wait 58624 00:17:09.824 ************************************ 00:17:09.824 END TEST alias_rpc 00:17:09.824 ************************************ 00:17:09.824 00:17:09.824 real 0m4.296s 00:17:09.824 user 0m4.221s 00:17:09.824 sys 0m0.624s 00:17:09.824 16:31:45 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.824 16:31:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.824 16:31:45 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:17:09.824 16:31:45 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:09.824 16:31:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:09.824 16:31:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.824 16:31:45 -- common/autotest_common.sh@10 -- # set +x 00:17:09.824 ************************************ 00:17:09.824 START TEST spdkcli_tcp 00:17:09.824 ************************************ 00:17:09.824 16:31:45 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:17:09.824 * Looking for test storage... 
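The alias_rpc teardown above shows the killprocess helper's shape: confirm the pid is alive with `kill -0`, read its command name via `ps --no-headers -o comm=` (reactor_0 here), refuse to signal a `sudo` wrapper, then kill and wait. A sketch reconstructed from those trace lines, not the verbatim autotest_common.sh source:

  killprocess_sketch() {
      local pid=$1 name
      kill -0 "$pid" || return 1                    # not running
      if [ "$(uname)" = Linux ]; then
          name=$(ps --no-headers -o comm= "$pid")
          [ "$name" = sudo ] && return 1            # never signal the sudo wrapper itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                       # reap it if it is our child
  }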
00:17:09.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:09.825 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:09.825 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:17:09.825 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.084 16:31:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.084 --rc genhtml_branch_coverage=1 00:17:10.084 --rc genhtml_function_coverage=1 00:17:10.084 --rc genhtml_legend=1 00:17:10.084 --rc geninfo_all_blocks=1 00:17:10.084 --rc geninfo_unexecuted_blocks=1 00:17:10.084 00:17:10.084 ' 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.084 --rc genhtml_branch_coverage=1 00:17:10.084 --rc genhtml_function_coverage=1 00:17:10.084 --rc genhtml_legend=1 00:17:10.084 --rc geninfo_all_blocks=1 00:17:10.084 --rc geninfo_unexecuted_blocks=1 00:17:10.084 
00:17:10.084 ' 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.084 --rc genhtml_branch_coverage=1 00:17:10.084 --rc genhtml_function_coverage=1 00:17:10.084 --rc genhtml_legend=1 00:17:10.084 --rc geninfo_all_blocks=1 00:17:10.084 --rc geninfo_unexecuted_blocks=1 00:17:10.084 00:17:10.084 ' 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:10.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.084 --rc genhtml_branch_coverage=1 00:17:10.084 --rc genhtml_function_coverage=1 00:17:10.084 --rc genhtml_legend=1 00:17:10.084 --rc geninfo_all_blocks=1 00:17:10.084 --rc geninfo_unexecuted_blocks=1 00:17:10.084 00:17:10.084 ' 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58732 00:17:10.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58732 00:17:10.084 16:31:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58732 ']' 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:10.084 16:31:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.084 [2024-10-17 16:31:46.335194] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
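In the records that follow, tcp.sh exposes the target's UNIX-domain RPC socket over TCP: socat listens on 127.0.0.1:9998 and forwards the connection to /var/tmp/spdk.sock, and rpc.py then talks to the TCP side with retries (-r 100) and a per-request timeout (-t 2). A minimal sketch of that bridge, assuming a repo-root working directory:

  # Single-shot TCP-to-UNIX bridge, exactly as the trace runs it.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!

  # Query over TCP instead of the UNIX socket; flags mirror the trace:
  # -r 100 retries, -t 2 s timeout, -s/-p select address and port.
  scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

  kill "$socat_pid" 2>/dev/null   # tear the bridge down when done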
00:17:10.084 [2024-10-17 16:31:46.335631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58732 ] 00:17:10.342 [2024-10-17 16:31:46.514661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:10.601 [2024-10-17 16:31:46.647547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.601 [2024-10-17 16:31:46.647573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.533 16:31:47 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:11.533 16:31:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:17:11.533 16:31:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58750 00:17:11.533 16:31:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:17:11.533 16:31:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:17:11.533 [ 00:17:11.533 "bdev_malloc_delete", 00:17:11.533 "bdev_malloc_create", 00:17:11.533 "bdev_null_resize", 00:17:11.533 "bdev_null_delete", 00:17:11.533 "bdev_null_create", 00:17:11.533 "bdev_nvme_cuse_unregister", 00:17:11.533 "bdev_nvme_cuse_register", 00:17:11.533 "bdev_opal_new_user", 00:17:11.533 "bdev_opal_set_lock_state", 00:17:11.533 "bdev_opal_delete", 00:17:11.533 "bdev_opal_get_info", 00:17:11.533 "bdev_opal_create", 00:17:11.533 "bdev_nvme_opal_revert", 00:17:11.533 "bdev_nvme_opal_init", 00:17:11.533 "bdev_nvme_send_cmd", 00:17:11.533 "bdev_nvme_set_keys", 00:17:11.533 "bdev_nvme_get_path_iostat", 00:17:11.533 "bdev_nvme_get_mdns_discovery_info", 00:17:11.533 "bdev_nvme_stop_mdns_discovery", 00:17:11.533 "bdev_nvme_start_mdns_discovery", 00:17:11.533 "bdev_nvme_set_multipath_policy", 00:17:11.533 "bdev_nvme_set_preferred_path", 00:17:11.533 "bdev_nvme_get_io_paths", 00:17:11.533 "bdev_nvme_remove_error_injection", 00:17:11.533 "bdev_nvme_add_error_injection", 00:17:11.533 "bdev_nvme_get_discovery_info", 00:17:11.533 "bdev_nvme_stop_discovery", 00:17:11.533 "bdev_nvme_start_discovery", 00:17:11.533 "bdev_nvme_get_controller_health_info", 00:17:11.533 "bdev_nvme_disable_controller", 00:17:11.533 "bdev_nvme_enable_controller", 00:17:11.533 "bdev_nvme_reset_controller", 00:17:11.533 "bdev_nvme_get_transport_statistics", 00:17:11.533 "bdev_nvme_apply_firmware", 00:17:11.533 "bdev_nvme_detach_controller", 00:17:11.533 "bdev_nvme_get_controllers", 00:17:11.533 "bdev_nvme_attach_controller", 00:17:11.533 "bdev_nvme_set_hotplug", 00:17:11.533 "bdev_nvme_set_options", 00:17:11.533 "bdev_passthru_delete", 00:17:11.533 "bdev_passthru_create", 00:17:11.533 "bdev_lvol_set_parent_bdev", 00:17:11.533 "bdev_lvol_set_parent", 00:17:11.533 "bdev_lvol_check_shallow_copy", 00:17:11.533 "bdev_lvol_start_shallow_copy", 00:17:11.533 "bdev_lvol_grow_lvstore", 00:17:11.533 "bdev_lvol_get_lvols", 00:17:11.533 "bdev_lvol_get_lvstores", 00:17:11.533 "bdev_lvol_delete", 00:17:11.533 "bdev_lvol_set_read_only", 00:17:11.533 "bdev_lvol_resize", 00:17:11.533 "bdev_lvol_decouple_parent", 00:17:11.533 "bdev_lvol_inflate", 00:17:11.533 "bdev_lvol_rename", 00:17:11.533 "bdev_lvol_clone_bdev", 00:17:11.533 "bdev_lvol_clone", 00:17:11.533 "bdev_lvol_snapshot", 00:17:11.533 "bdev_lvol_create", 00:17:11.533 "bdev_lvol_delete_lvstore", 00:17:11.533 "bdev_lvol_rename_lvstore", 00:17:11.533 
"bdev_lvol_create_lvstore", 00:17:11.533 "bdev_raid_set_options", 00:17:11.533 "bdev_raid_remove_base_bdev", 00:17:11.533 "bdev_raid_add_base_bdev", 00:17:11.533 "bdev_raid_delete", 00:17:11.533 "bdev_raid_create", 00:17:11.533 "bdev_raid_get_bdevs", 00:17:11.533 "bdev_error_inject_error", 00:17:11.533 "bdev_error_delete", 00:17:11.533 "bdev_error_create", 00:17:11.533 "bdev_split_delete", 00:17:11.533 "bdev_split_create", 00:17:11.533 "bdev_delay_delete", 00:17:11.533 "bdev_delay_create", 00:17:11.533 "bdev_delay_update_latency", 00:17:11.533 "bdev_zone_block_delete", 00:17:11.533 "bdev_zone_block_create", 00:17:11.533 "blobfs_create", 00:17:11.533 "blobfs_detect", 00:17:11.533 "blobfs_set_cache_size", 00:17:11.533 "bdev_xnvme_delete", 00:17:11.533 "bdev_xnvme_create", 00:17:11.533 "bdev_aio_delete", 00:17:11.533 "bdev_aio_rescan", 00:17:11.533 "bdev_aio_create", 00:17:11.533 "bdev_ftl_set_property", 00:17:11.533 "bdev_ftl_get_properties", 00:17:11.533 "bdev_ftl_get_stats", 00:17:11.533 "bdev_ftl_unmap", 00:17:11.533 "bdev_ftl_unload", 00:17:11.533 "bdev_ftl_delete", 00:17:11.533 "bdev_ftl_load", 00:17:11.533 "bdev_ftl_create", 00:17:11.533 "bdev_virtio_attach_controller", 00:17:11.533 "bdev_virtio_scsi_get_devices", 00:17:11.533 "bdev_virtio_detach_controller", 00:17:11.533 "bdev_virtio_blk_set_hotplug", 00:17:11.533 "bdev_iscsi_delete", 00:17:11.533 "bdev_iscsi_create", 00:17:11.533 "bdev_iscsi_set_options", 00:17:11.533 "accel_error_inject_error", 00:17:11.533 "ioat_scan_accel_module", 00:17:11.533 "dsa_scan_accel_module", 00:17:11.533 "iaa_scan_accel_module", 00:17:11.533 "keyring_file_remove_key", 00:17:11.533 "keyring_file_add_key", 00:17:11.533 "keyring_linux_set_options", 00:17:11.533 "fsdev_aio_delete", 00:17:11.533 "fsdev_aio_create", 00:17:11.533 "iscsi_get_histogram", 00:17:11.533 "iscsi_enable_histogram", 00:17:11.533 "iscsi_set_options", 00:17:11.533 "iscsi_get_auth_groups", 00:17:11.533 "iscsi_auth_group_remove_secret", 00:17:11.533 "iscsi_auth_group_add_secret", 00:17:11.533 "iscsi_delete_auth_group", 00:17:11.533 "iscsi_create_auth_group", 00:17:11.533 "iscsi_set_discovery_auth", 00:17:11.533 "iscsi_get_options", 00:17:11.533 "iscsi_target_node_request_logout", 00:17:11.533 "iscsi_target_node_set_redirect", 00:17:11.533 "iscsi_target_node_set_auth", 00:17:11.533 "iscsi_target_node_add_lun", 00:17:11.533 "iscsi_get_stats", 00:17:11.533 "iscsi_get_connections", 00:17:11.533 "iscsi_portal_group_set_auth", 00:17:11.533 "iscsi_start_portal_group", 00:17:11.533 "iscsi_delete_portal_group", 00:17:11.533 "iscsi_create_portal_group", 00:17:11.533 "iscsi_get_portal_groups", 00:17:11.533 "iscsi_delete_target_node", 00:17:11.533 "iscsi_target_node_remove_pg_ig_maps", 00:17:11.533 "iscsi_target_node_add_pg_ig_maps", 00:17:11.533 "iscsi_create_target_node", 00:17:11.533 "iscsi_get_target_nodes", 00:17:11.533 "iscsi_delete_initiator_group", 00:17:11.533 "iscsi_initiator_group_remove_initiators", 00:17:11.533 "iscsi_initiator_group_add_initiators", 00:17:11.533 "iscsi_create_initiator_group", 00:17:11.533 "iscsi_get_initiator_groups", 00:17:11.533 "nvmf_set_crdt", 00:17:11.533 "nvmf_set_config", 00:17:11.533 "nvmf_set_max_subsystems", 00:17:11.533 "nvmf_stop_mdns_prr", 00:17:11.533 "nvmf_publish_mdns_prr", 00:17:11.533 "nvmf_subsystem_get_listeners", 00:17:11.533 "nvmf_subsystem_get_qpairs", 00:17:11.533 "nvmf_subsystem_get_controllers", 00:17:11.533 "nvmf_get_stats", 00:17:11.533 "nvmf_get_transports", 00:17:11.533 "nvmf_create_transport", 00:17:11.533 "nvmf_get_targets", 00:17:11.533 
"nvmf_delete_target", 00:17:11.533 "nvmf_create_target", 00:17:11.533 "nvmf_subsystem_allow_any_host", 00:17:11.533 "nvmf_subsystem_set_keys", 00:17:11.533 "nvmf_subsystem_remove_host", 00:17:11.533 "nvmf_subsystem_add_host", 00:17:11.533 "nvmf_ns_remove_host", 00:17:11.533 "nvmf_ns_add_host", 00:17:11.533 "nvmf_subsystem_remove_ns", 00:17:11.533 "nvmf_subsystem_set_ns_ana_group", 00:17:11.533 "nvmf_subsystem_add_ns", 00:17:11.533 "nvmf_subsystem_listener_set_ana_state", 00:17:11.533 "nvmf_discovery_get_referrals", 00:17:11.533 "nvmf_discovery_remove_referral", 00:17:11.533 "nvmf_discovery_add_referral", 00:17:11.533 "nvmf_subsystem_remove_listener", 00:17:11.533 "nvmf_subsystem_add_listener", 00:17:11.533 "nvmf_delete_subsystem", 00:17:11.533 "nvmf_create_subsystem", 00:17:11.533 "nvmf_get_subsystems", 00:17:11.533 "env_dpdk_get_mem_stats", 00:17:11.533 "nbd_get_disks", 00:17:11.533 "nbd_stop_disk", 00:17:11.533 "nbd_start_disk", 00:17:11.533 "ublk_recover_disk", 00:17:11.533 "ublk_get_disks", 00:17:11.533 "ublk_stop_disk", 00:17:11.533 "ublk_start_disk", 00:17:11.533 "ublk_destroy_target", 00:17:11.533 "ublk_create_target", 00:17:11.534 "virtio_blk_create_transport", 00:17:11.534 "virtio_blk_get_transports", 00:17:11.534 "vhost_controller_set_coalescing", 00:17:11.534 "vhost_get_controllers", 00:17:11.534 "vhost_delete_controller", 00:17:11.534 "vhost_create_blk_controller", 00:17:11.534 "vhost_scsi_controller_remove_target", 00:17:11.534 "vhost_scsi_controller_add_target", 00:17:11.534 "vhost_start_scsi_controller", 00:17:11.534 "vhost_create_scsi_controller", 00:17:11.534 "thread_set_cpumask", 00:17:11.534 "scheduler_set_options", 00:17:11.534 "framework_get_governor", 00:17:11.534 "framework_get_scheduler", 00:17:11.534 "framework_set_scheduler", 00:17:11.534 "framework_get_reactors", 00:17:11.534 "thread_get_io_channels", 00:17:11.534 "thread_get_pollers", 00:17:11.534 "thread_get_stats", 00:17:11.534 "framework_monitor_context_switch", 00:17:11.534 "spdk_kill_instance", 00:17:11.534 "log_enable_timestamps", 00:17:11.534 "log_get_flags", 00:17:11.534 "log_clear_flag", 00:17:11.534 "log_set_flag", 00:17:11.534 "log_get_level", 00:17:11.534 "log_set_level", 00:17:11.534 "log_get_print_level", 00:17:11.534 "log_set_print_level", 00:17:11.534 "framework_enable_cpumask_locks", 00:17:11.534 "framework_disable_cpumask_locks", 00:17:11.534 "framework_wait_init", 00:17:11.534 "framework_start_init", 00:17:11.534 "scsi_get_devices", 00:17:11.534 "bdev_get_histogram", 00:17:11.534 "bdev_enable_histogram", 00:17:11.534 "bdev_set_qos_limit", 00:17:11.534 "bdev_set_qd_sampling_period", 00:17:11.534 "bdev_get_bdevs", 00:17:11.534 "bdev_reset_iostat", 00:17:11.534 "bdev_get_iostat", 00:17:11.534 "bdev_examine", 00:17:11.534 "bdev_wait_for_examine", 00:17:11.534 "bdev_set_options", 00:17:11.534 "accel_get_stats", 00:17:11.534 "accel_set_options", 00:17:11.534 "accel_set_driver", 00:17:11.534 "accel_crypto_key_destroy", 00:17:11.534 "accel_crypto_keys_get", 00:17:11.534 "accel_crypto_key_create", 00:17:11.534 "accel_assign_opc", 00:17:11.534 "accel_get_module_info", 00:17:11.534 "accel_get_opc_assignments", 00:17:11.534 "vmd_rescan", 00:17:11.534 "vmd_remove_device", 00:17:11.534 "vmd_enable", 00:17:11.534 "sock_get_default_impl", 00:17:11.534 "sock_set_default_impl", 00:17:11.534 "sock_impl_set_options", 00:17:11.534 "sock_impl_get_options", 00:17:11.534 "iobuf_get_stats", 00:17:11.534 "iobuf_set_options", 00:17:11.534 "keyring_get_keys", 00:17:11.534 "framework_get_pci_devices", 00:17:11.534 
"framework_get_config", 00:17:11.534 "framework_get_subsystems", 00:17:11.534 "fsdev_set_opts", 00:17:11.534 "fsdev_get_opts", 00:17:11.534 "trace_get_info", 00:17:11.534 "trace_get_tpoint_group_mask", 00:17:11.534 "trace_disable_tpoint_group", 00:17:11.534 "trace_enable_tpoint_group", 00:17:11.534 "trace_clear_tpoint_mask", 00:17:11.534 "trace_set_tpoint_mask", 00:17:11.534 "notify_get_notifications", 00:17:11.534 "notify_get_types", 00:17:11.534 "spdk_get_version", 00:17:11.534 "rpc_get_methods" 00:17:11.534 ] 00:17:11.534 16:31:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:17:11.534 16:31:47 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.534 16:31:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:11.793 16:31:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:11.793 16:31:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58732 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58732 ']' 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58732 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58732 00:17:11.793 killing process with pid 58732 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58732' 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58732 00:17:11.793 16:31:47 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58732 00:17:14.392 ************************************ 00:17:14.392 END TEST spdkcli_tcp 00:17:14.392 ************************************ 00:17:14.392 00:17:14.392 real 0m4.393s 00:17:14.392 user 0m7.782s 00:17:14.392 sys 0m0.692s 00:17:14.392 16:31:50 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:14.392 16:31:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:14.392 16:31:50 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:14.392 16:31:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:14.392 16:31:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:14.392 16:31:50 -- common/autotest_common.sh@10 -- # set +x 00:17:14.392 ************************************ 00:17:14.392 START TEST dpdk_mem_utility 00:17:14.392 ************************************ 00:17:14.392 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:14.392 * Looking for test storage... 
00:17:14.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:17:14.392 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:14.392 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:17:14.392 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:14.392 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.392 16:31:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:17:14.393 16:31:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:17:14.393 16:31:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.393 16:31:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:17:14.393 16:31:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.393 16:31:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.393 16:31:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.393 16:31:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.393 --rc genhtml_branch_coverage=1 00:17:14.393 --rc genhtml_function_coverage=1 00:17:14.393 --rc genhtml_legend=1 00:17:14.393 --rc geninfo_all_blocks=1 00:17:14.393 --rc geninfo_unexecuted_blocks=1 00:17:14.393 00:17:14.393 ' 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.393 --rc 
genhtml_branch_coverage=1 00:17:14.393 --rc genhtml_function_coverage=1 00:17:14.393 --rc genhtml_legend=1 00:17:14.393 --rc geninfo_all_blocks=1 00:17:14.393 --rc geninfo_unexecuted_blocks=1 00:17:14.393 00:17:14.393 ' 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.393 --rc genhtml_branch_coverage=1 00:17:14.393 --rc genhtml_function_coverage=1 00:17:14.393 --rc genhtml_legend=1 00:17:14.393 --rc geninfo_all_blocks=1 00:17:14.393 --rc geninfo_unexecuted_blocks=1 00:17:14.393 00:17:14.393 ' 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:14.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.393 --rc genhtml_branch_coverage=1 00:17:14.393 --rc genhtml_function_coverage=1 00:17:14.393 --rc genhtml_legend=1 00:17:14.393 --rc geninfo_all_blocks=1 00:17:14.393 --rc geninfo_unexecuted_blocks=1 00:17:14.393 00:17:14.393 ' 00:17:14.393 16:31:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:14.393 16:31:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:14.393 16:31:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58855 00:17:14.393 16:31:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58855 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58855 ']' 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:14.393 16:31:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:14.652 [2024-10-17 16:31:50.780957] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
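The dpdk_mem_utility records below drive the flow this test checks: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py summarizes it (heaps, mempools, memzones), with -m 0 narrowing the report to heap id 0. A sketch of that flow against a running target, assuming a repo-root working directory:

  # Ask the target to dump its DPDK memory state; the RPC replies with the path,
  # {"filename": "/tmp/spdk_mem_dump.txt"} as seen in the trace.
  scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize the dump: heap totals, mempools, memzones.
  scripts/dpdk_mem_info.py

  # Or restrict the report to a single heap (id 0 here), as the trace does.
  scripts/dpdk_mem_info.py -m 0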
00:17:14.652 [2024-10-17 16:31:50.781307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58855 ] 00:17:14.909 [2024-10-17 16:31:50.956963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.909 [2024-10-17 16:31:51.083643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.843 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:15.843 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:17:15.843 16:31:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:17:15.843 16:31:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:17:15.843 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.843 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:15.843 { 00:17:15.843 "filename": "/tmp/spdk_mem_dump.txt" 00:17:15.843 } 00:17:15.843 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.843 16:31:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:15.843 DPDK memory size 816.000000 MiB in 1 heap(s) 00:17:15.843 1 heaps totaling size 816.000000 MiB 00:17:15.843 size: 816.000000 MiB heap id: 0 00:17:15.843 end heaps---------- 00:17:15.843 9 mempools totaling size 595.772034 MiB 00:17:15.843 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:17:15.843 size: 158.602051 MiB name: PDU_data_out_Pool 00:17:15.843 size: 92.545471 MiB name: bdev_io_58855 00:17:15.843 size: 50.003479 MiB name: msgpool_58855 00:17:15.843 size: 36.509338 MiB name: fsdev_io_58855 00:17:15.843 size: 21.763794 MiB name: PDU_Pool 00:17:15.843 size: 19.513306 MiB name: SCSI_TASK_Pool 00:17:15.843 size: 4.133484 MiB name: evtpool_58855 00:17:15.843 size: 0.026123 MiB name: Session_Pool 00:17:15.843 end mempools------- 00:17:15.843 6 memzones totaling size 4.142822 MiB 00:17:15.843 size: 1.000366 MiB name: RG_ring_0_58855 00:17:15.843 size: 1.000366 MiB name: RG_ring_1_58855 00:17:15.843 size: 1.000366 MiB name: RG_ring_4_58855 00:17:15.843 size: 1.000366 MiB name: RG_ring_5_58855 00:17:15.843 size: 0.125366 MiB name: RG_ring_2_58855 00:17:15.843 size: 0.015991 MiB name: RG_ring_3_58855 00:17:15.843 end memzones------- 00:17:15.843 16:31:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:17:16.103 heap id: 0 total size: 816.000000 MiB number of busy elements: 320 number of free elements: 18 00:17:16.103 list of free elements. 
size: 16.790161 MiB
00:17:16.103 element at address: 0x200006400000 with size: 1.995972 MiB
00:17:16.103 element at address: 0x20000a600000 with size: 1.995972 MiB
00:17:16.103 element at address: 0x200003e00000 with size: 1.991028 MiB
00:17:16.103 element at address: 0x200018d00040 with size: 0.999939 MiB
00:17:16.103 element at address: 0x200019100040 with size: 0.999939 MiB
00:17:16.103 element at address: 0x200019200000 with size: 0.999084 MiB
00:17:16.103 element at address: 0x200031e00000 with size: 0.994324 MiB
00:17:16.103 element at address: 0x200000400000 with size: 0.992004 MiB
00:17:16.103 element at address: 0x200018a00000 with size: 0.959656 MiB
00:17:16.103 element at address: 0x200019500040 with size: 0.936401 MiB
00:17:16.103 element at address: 0x200000200000 with size: 0.716980 MiB
00:17:16.103 element at address: 0x20001ac00000 with size: 0.560730 MiB
00:17:16.103 element at address: 0x200000c00000 with size: 0.490173 MiB
00:17:16.103 element at address: 0x200018e00000 with size: 0.487976 MiB
00:17:16.103 element at address: 0x200019600000 with size: 0.485413 MiB
00:17:16.103 element at address: 0x200012c00000 with size: 0.443237 MiB
00:17:16.103 element at address: 0x200028000000 with size: 0.390442 MiB
00:17:16.103 element at address: 0x200000800000 with size: 0.350891 MiB
00:17:16.103 list of standard malloc elements. size: 199.288940 MiB
00:17:16.103 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:17:16.103 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:17:16.103 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:17:16.103 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:17:16.103 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:17:16.103 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:17:16.103 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:17:16.103 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:17:16.103 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:17:16.103 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:17:16.103 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred uniform "element at address: 0x... with size: 0.000244 MiB" records elided: small fixed-size slots spanning the 0x200000..., 0x20000a..., 0x200012..., 0x200018...-0x200019..., 0x20001ac..., and 0x2000280... address regions, ending at 0x20002806fe80]
00:17:16.105 list of memzone associated elements.
size: 599.920898 MiB 00:17:16.105 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:17:16.105 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:17:16.105 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:17:16.105 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:17:16.105 element at address: 0x200012df4740 with size: 92.045105 MiB 00:17:16.105 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58855_0 00:17:16.105 element at address: 0x200000dff340 with size: 48.003113 MiB 00:17:16.105 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58855_0 00:17:16.105 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:17:16.105 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58855_0 00:17:16.105 element at address: 0x2000197be900 with size: 20.255615 MiB 00:17:16.105 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:17:16.105 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:17:16.105 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:17:16.105 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:17:16.105 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58855_0 00:17:16.105 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:17:16.105 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58855 00:17:16.105 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:17:16.105 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58855 00:17:16.105 element at address: 0x200018efde00 with size: 1.008179 MiB 00:17:16.105 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:17:16.105 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:17:16.105 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:17:16.105 element at address: 0x200018afde00 with size: 1.008179 MiB 00:17:16.105 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:17:16.105 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:17:16.105 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:17:16.105 element at address: 0x200000cff100 with size: 1.000549 MiB 00:17:16.105 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58855 00:17:16.105 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:17:16.105 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58855 00:17:16.105 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:17:16.105 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58855 00:17:16.106 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:17:16.106 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58855 00:17:16.106 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:17:16.106 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58855 00:17:16.106 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:17:16.106 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58855 00:17:16.106 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:17:16.106 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:17:16.106 element at address: 0x200012c72280 with size: 0.500549 MiB 00:17:16.106 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:17:16.106 element at address: 0x20001967c440 with size: 0.250549 MiB 00:17:16.106 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:17:16.106 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:17:16.106 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58855 00:17:16.106 element at address: 0x20000085df80 with size: 0.125549 MiB 00:17:16.106 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58855 00:17:16.106 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:17:16.106 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:17:16.106 element at address: 0x200028064140 with size: 0.023804 MiB 00:17:16.106 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:17:16.106 element at address: 0x200000859d40 with size: 0.016174 MiB 00:17:16.106 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58855 00:17:16.106 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:17:16.106 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:17:16.106 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:17:16.106 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58855 00:17:16.106 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:17:16.106 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58855 00:17:16.106 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:17:16.106 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58855 00:17:16.106 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:17:16.106 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:17:16.106 16:31:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:17:16.106 16:31:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58855 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58855 ']' 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58855 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58855 00:17:16.106 killing process with pid 58855 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58855' 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58855 00:17:16.106 16:31:52 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58855 00:17:18.689 00:17:18.689 real 0m4.293s 00:17:18.689 user 0m4.237s 00:17:18.689 sys 0m0.617s 00:17:18.689 ************************************ 00:17:18.689 END TEST dpdk_mem_utility 00:17:18.689 ************************************ 00:17:18.689 16:31:54 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.689 16:31:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:17:18.689 16:31:54 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:17:18.689 16:31:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:18.689 16:31:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.689 16:31:54 -- common/autotest_common.sh@10 -- # set +x 
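The per-address element list and the memzone table above are a DPDK heap snapshot: the dpdk_mem_utility test starts an SPDK target, pulls the dump over JSON-RPC, and then inspects the resulting file. A minimal sketch of that flow, assuming the stock env_dpdk_get_mem_stats RPC and its conventional /tmp dump path (the actual test_dpdk_mem_info.sh wiring may differ in detail):
  # Start a target, snapshot its DPDK heap, count malloc elements, clean up.
  build/bin/spdk_tgt &
  tgt=$!
  sleep 2                                     # crude wait; the test uses waitforlisten
  scripts/rpc.py env_dpdk_get_mem_stats       # replies with the dump file name, e.g. /tmp/spdk_mem_dump.txt
  grep -c 'element at address' /tmp/spdk_mem_dump.txt
  kill "$tgt"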
00:17:18.689 ************************************ 00:17:18.689 START TEST event 00:17:18.689 ************************************ 00:17:18.689 16:31:54 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:17:18.689 * Looking for test storage... 00:17:18.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:17:18.689 16:31:54 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:18.689 16:31:54 event -- common/autotest_common.sh@1691 -- # lcov --version 00:17:18.689 16:31:54 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:18.948 16:31:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:18.948 16:31:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:18.948 16:31:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:18.948 16:31:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.948 16:31:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:17:18.948 16:31:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:17:18.948 16:31:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:17:18.948 16:31:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:17:18.948 16:31:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:17:18.948 16:31:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:17:18.948 16:31:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:18.948 16:31:55 event -- scripts/common.sh@344 -- # case "$op" in 00:17:18.948 16:31:55 event -- scripts/common.sh@345 -- # : 1 00:17:18.948 16:31:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:18.948 16:31:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:18.948 16:31:55 event -- scripts/common.sh@365 -- # decimal 1 00:17:18.948 16:31:55 event -- scripts/common.sh@353 -- # local d=1 00:17:18.948 16:31:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.948 16:31:55 event -- scripts/common.sh@355 -- # echo 1 00:17:18.948 16:31:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:17:18.948 16:31:55 event -- scripts/common.sh@366 -- # decimal 2 00:17:18.948 16:31:55 event -- scripts/common.sh@353 -- # local d=2 00:17:18.948 16:31:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.948 16:31:55 event -- scripts/common.sh@355 -- # echo 2 00:17:18.948 16:31:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:17:18.948 16:31:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:18.948 16:31:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:18.948 16:31:55 event -- scripts/common.sh@368 -- # return 0 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.948 --rc genhtml_branch_coverage=1 00:17:18.948 --rc genhtml_function_coverage=1 00:17:18.948 --rc genhtml_legend=1 00:17:18.948 --rc geninfo_all_blocks=1 00:17:18.948 --rc geninfo_unexecuted_blocks=1 00:17:18.948 00:17:18.948 ' 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.948 --rc genhtml_branch_coverage=1 00:17:18.948 --rc genhtml_function_coverage=1 00:17:18.948 --rc genhtml_legend=1 00:17:18.948 --rc 
geninfo_all_blocks=1 00:17:18.948 --rc geninfo_unexecuted_blocks=1 00:17:18.948 00:17:18.948 ' 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.948 --rc genhtml_branch_coverage=1 00:17:18.948 --rc genhtml_function_coverage=1 00:17:18.948 --rc genhtml_legend=1 00:17:18.948 --rc geninfo_all_blocks=1 00:17:18.948 --rc geninfo_unexecuted_blocks=1 00:17:18.948 00:17:18.948 ' 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.948 --rc genhtml_branch_coverage=1 00:17:18.948 --rc genhtml_function_coverage=1 00:17:18.948 --rc genhtml_legend=1 00:17:18.948 --rc geninfo_all_blocks=1 00:17:18.948 --rc geninfo_unexecuted_blocks=1 00:17:18.948 00:17:18.948 ' 00:17:18.948 16:31:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:18.948 16:31:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:17:18.948 16:31:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:17:18.948 16:31:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.948 16:31:55 event -- common/autotest_common.sh@10 -- # set +x 00:17:18.948 ************************************ 00:17:18.948 START TEST event_perf 00:17:18.948 ************************************ 00:17:18.948 16:31:55 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:18.948 Running I/O for 1 seconds...[2024-10-17 16:31:55.098013] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:17:18.948 [2024-10-17 16:31:55.098239] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58968 ] 00:17:19.207 [2024-10-17 16:31:55.272074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.207 [2024-10-17 16:31:55.399963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.207 [2024-10-17 16:31:55.400142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.207 [2024-10-17 16:31:55.400263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.207 [2024-10-17 16:31:55.400345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.583 Running I/O for 1 seconds... 00:17:20.583 lcore 0: 197626 00:17:20.583 lcore 1: 197627 00:17:20.583 lcore 2: 197627 00:17:20.583 lcore 3: 197627 00:17:20.583 done. 
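event_perf above ran one reactor per core in the 0xF mask for one second and printed how many events each lcore consumed; the four counters (roughly 197.6k apiece) confirm an even spread across the reactors. The binary takes the same flags standalone, as seen in the trace:
  # -m <mask>: cores to run reactors on; -t <sec>: measurement window
  test/event/event_perf/event_perf -m 0xF -t 1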
00:17:20.583 00:17:20.583 real 0m1.605s 00:17:20.583 user 0m4.342s 00:17:20.583 sys 0m0.140s 00:17:20.583 16:31:56 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.583 ************************************ 00:17:20.583 END TEST event_perf 00:17:20.583 ************************************ 00:17:20.583 16:31:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:17:20.583 16:31:56 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:20.583 16:31:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:20.583 16:31:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:20.583 16:31:56 event -- common/autotest_common.sh@10 -- # set +x 00:17:20.583 ************************************ 00:17:20.583 START TEST event_reactor 00:17:20.583 ************************************ 00:17:20.583 16:31:56 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:20.583 [2024-10-17 16:31:56.780029] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:17:20.583 [2024-10-17 16:31:56.780152] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59007 ] 00:17:20.840 [2024-10-17 16:31:56.952345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.840 [2024-10-17 16:31:57.071814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.212 test_start 00:17:22.212 oneshot 00:17:22.212 tick 100 00:17:22.212 tick 100 00:17:22.212 tick 250 00:17:22.212 tick 100 00:17:22.212 tick 100 00:17:22.212 tick 100 00:17:22.212 tick 250 00:17:22.212 tick 500 00:17:22.212 tick 100 00:17:22.212 tick 100 00:17:22.212 tick 250 00:17:22.212 tick 100 00:17:22.212 tick 100 00:17:22.212 test_end 00:17:22.212 00:17:22.212 real 0m1.575s 00:17:22.212 user 0m1.352s 00:17:22.212 sys 0m0.114s 00:17:22.212 ************************************ 00:17:22.212 END TEST event_reactor 00:17:22.212 ************************************ 00:17:22.212 16:31:58 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.212 16:31:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:17:22.212 16:31:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:22.212 16:31:58 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:22.212 16:31:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.212 16:31:58 event -- common/autotest_common.sh@10 -- # set +x 00:17:22.212 ************************************ 00:17:22.212 START TEST event_reactor_perf 00:17:22.212 ************************************ 00:17:22.212 16:31:58 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:22.212 [2024-10-17 16:31:58.431530] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:17:22.213 [2024-10-17 16:31:58.431644] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59044 ] 00:17:22.471 [2024-10-17 16:31:58.605308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.471 [2024-10-17 16:31:58.727134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.847 test_start 00:17:23.847 test_end 00:17:23.847 Performance: 365579 events per second 00:17:23.847 00:17:23.847 real 0m1.587s 00:17:23.847 user 0m1.366s 00:17:23.847 sys 0m0.112s 00:17:23.847 16:31:59 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:23.847 ************************************ 00:17:23.847 END TEST event_reactor_perf 00:17:23.847 ************************************ 00:17:23.847 16:31:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:17:23.847 16:32:00 event -- event/event.sh@49 -- # uname -s 00:17:23.847 16:32:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:17:23.847 16:32:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:23.847 16:32:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:23.847 16:32:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:23.847 16:32:00 event -- common/autotest_common.sh@10 -- # set +x 00:17:23.847 ************************************ 00:17:23.847 START TEST event_scheduler 00:17:23.847 ************************************ 00:17:23.847 16:32:00 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:24.107 * Looking for test storage... 
00:17:24.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:17:24.107 16:32:00 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:24.107 16:32:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:17:24.107 16:32:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:24.107 16:32:00 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:24.107 16:32:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.107 16:32:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:17:24.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.108 16:32:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:24.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.108 --rc genhtml_branch_coverage=1 00:17:24.108 --rc genhtml_function_coverage=1 00:17:24.108 --rc genhtml_legend=1 00:17:24.108 --rc geninfo_all_blocks=1 00:17:24.108 --rc geninfo_unexecuted_blocks=1 00:17:24.108 00:17:24.108 ' 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:24.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.108 --rc genhtml_branch_coverage=1 00:17:24.108 --rc genhtml_function_coverage=1 00:17:24.108 --rc genhtml_legend=1 00:17:24.108 --rc geninfo_all_blocks=1 00:17:24.108 --rc geninfo_unexecuted_blocks=1 00:17:24.108 00:17:24.108 ' 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:24.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.108 --rc genhtml_branch_coverage=1 00:17:24.108 --rc genhtml_function_coverage=1 00:17:24.108 --rc genhtml_legend=1 00:17:24.108 --rc geninfo_all_blocks=1 00:17:24.108 --rc geninfo_unexecuted_blocks=1 00:17:24.108 00:17:24.108 ' 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:24.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.108 --rc genhtml_branch_coverage=1 00:17:24.108 --rc genhtml_function_coverage=1 00:17:24.108 --rc genhtml_legend=1 00:17:24.108 --rc geninfo_all_blocks=1 00:17:24.108 --rc geninfo_unexecuted_blocks=1 00:17:24.108 00:17:24.108 ' 00:17:24.108 16:32:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:17:24.108 16:32:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59120 00:17:24.108 16:32:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:17:24.108 16:32:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59120 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59120 ']' 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.108 16:32:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:24.108 16:32:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:17:24.108 [2024-10-17 16:32:00.343565] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:17:24.108 [2024-10-17 16:32:00.343725] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59120 ] 00:17:24.367 [2024-10-17 16:32:00.522778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.367 [2024-10-17 16:32:00.659280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.367 [2024-10-17 16:32:00.659366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.367 [2024-10-17 16:32:00.659384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.367 [2024-10-17 16:32:00.659395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:17:25.304 16:32:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:25.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:25.304 POWER: Cannot set governor of lcore 0 to userspace 00:17:25.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:25.304 POWER: Cannot set governor of lcore 0 to performance 00:17:25.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:25.304 POWER: Cannot set governor of lcore 0 to userspace 00:17:25.304 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:25.304 POWER: Cannot set governor of lcore 0 to userspace 00:17:25.304 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:17:25.304 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:17:25.304 POWER: Unable to set Power Management Environment for lcore 0 00:17:25.304 [2024-10-17 16:32:01.236693] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:17:25.304 [2024-10-17 16:32:01.236738] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:17:25.304 [2024-10-17 16:32:01.236754] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:17:25.304 [2024-10-17 16:32:01.236775] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:17:25.304 [2024-10-17 16:32:01.236787] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:17:25.304 [2024-10-17 16:32:01.236800] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.304 16:32:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:25.304 [2024-10-17 16:32:01.573889] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
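Because the scheduler test app was launched with --wait-for-rpc, the script selects the scheduler first and only then completes subsystem init; the POWER/GUEST_CHANNEL errors above just mean this VM exposes no cpufreq governor, so the dynamic scheduler falls back to its defaults (load limit 20, core limit 80, core busy 95) without the dpdk_governor. The rpc_cmd calls in the trace reduce to roughly this sequence, issued against the app's RPC socket shown earlier:
  scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init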
00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.304 16:32:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.304 16:32:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:25.304 ************************************ 00:17:25.304 START TEST scheduler_create_thread 00:17:25.304 ************************************ 00:17:25.304 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:17:25.304 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:17:25.305 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.305 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 2 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 3 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 4 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 5 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 6 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 7 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 8 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 9 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 10 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.565 16:32:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:26.943 16:32:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.943 16:32:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:17:26.943 16:32:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:17:26.943 16:32:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.943 16:32:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:27.879 16:32:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.879 16:32:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:17:27.879 16:32:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.879 16:32:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:28.447 16:32:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.447 16:32:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:17:28.447 16:32:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:17:28.447 16:32:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.447 16:32:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:29.396 ************************************ 00:17:29.396 END TEST scheduler_create_thread 00:17:29.396 ************************************ 00:17:29.396 16:32:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.396 00:17:29.396 real 0m3.886s 00:17:29.396 user 0m0.026s 00:17:29.396 sys 0m0.012s 00:17:29.396 16:32:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:29.396 16:32:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:17:29.396 16:32:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:29.396 16:32:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59120 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59120 ']' 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59120 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59120 00:17:29.396 killing process with pid 59120 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59120' 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59120 00:17:29.396 16:32:05 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 59120 00:17:29.654 [2024-10-17 16:32:05.857019] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
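scheduler_create_thread drove the test app's RPC plugin: four busy pinned threads (-a 100, masks 0x1 through 0x8), four idle pinned ones (-a 0), two unpinned threads (one_third_active and half_active), then one thread's activity was lowered and a throwaway thread deleted before shutdown. Condensed from the rpc_cmd trace (thread ids such as 11 and 12 come back from the create calls):
  rpc="scripts/rpc.py --plugin scheduler_plugin"                 # plugin shipped with the scheduler test
  $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100    # repeated for 0x2/0x4/0x8
  $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0        # likewise per core
  $rpc scheduler_thread_set_active 11 50
  $rpc scheduler_thread_delete 12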
00:17:31.032 00:17:31.032 real 0m6.978s 00:17:31.032 user 0m14.469s 00:17:31.032 sys 0m0.556s 00:17:31.032 16:32:07 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:31.032 16:32:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:17:31.032 ************************************ 00:17:31.032 END TEST event_scheduler 00:17:31.032 ************************************ 00:17:31.032 16:32:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:17:31.032 16:32:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:17:31.032 16:32:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:31.032 16:32:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:31.032 16:32:07 event -- common/autotest_common.sh@10 -- # set +x 00:17:31.032 ************************************ 00:17:31.032 START TEST app_repeat 00:17:31.032 ************************************ 00:17:31.032 16:32:07 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:17:31.032 Process app_repeat pid: 59237 00:17:31.032 spdk_app_start Round 0 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59237 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59237' 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59237 /var/tmp/spdk-nbd.sock 00:17:31.032 16:32:07 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:17:31.032 16:32:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59237 ']' 00:17:31.032 16:32:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:31.032 16:32:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:31.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:31.032 16:32:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:31.032 16:32:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:31.032 16:32:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:31.032 [2024-10-17 16:32:07.164757] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
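app_repeat, started above against /var/tmp/spdk-nbd.sock with a two-core mask and four repeat rounds, restarts the app each round and rebuilds its block stack; the trace that follows shows round 0 creating two 64 MiB malloc bdevs and exporting them as NBD devices. The RPC pairing, per the commands visible in the trace:
  # 64 = total size in MiB, 4096 = block size in bytes; the calls return Malloc0/Malloc1
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0   # Malloc1 -> /dev/nbd1 likewise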
00:17:31.032 [2024-10-17 16:32:07.164873] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:17:31.292 [2024-10-17 16:32:07.331692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:31.292 [2024-10-17 16:32:07.444855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.292 [2024-10-17 16:32:07.444888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.860 16:32:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.860 16:32:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:17:31.860 16:32:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:32.130 Malloc0 00:17:32.130 16:32:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:32.399 Malloc1 00:17:32.399 16:32:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:32.399 16:32:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:32.400 16:32:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:32.400 16:32:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:32.400 16:32:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:32.400 16:32:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:32.400 16:32:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.400 16:32:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:32.658 /dev/nbd0 00:17:32.916 16:32:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:32.916 16:32:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:32.916 16:32:08 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:32.916 1+0 records in 00:17:32.916 1+0 records out 00:17:32.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268215 s, 15.3 MB/s 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:32.916 16:32:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:17:32.916 16:32:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:32.916 16:32:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:32.916 16:32:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:32.916 /dev/nbd1 00:17:33.174 16:32:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:33.174 16:32:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:33.174 1+0 records in 00:17:33.174 1+0 records out 00:17:33.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420325 s, 9.7 MB/s 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:33.174 16:32:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:17:33.174 16:32:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.174 16:32:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.174 16:32:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:33.174 16:32:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.174 
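# A minimal sketch of the waitfornbd helper whose xtrace repeats just above: poll
# /proc/partitions until the device shows up, then prove it is readable by pulling
# one block with direct I/O. The sleep between retries is an assumption (it is not
# visible in the trace), and the scratch-file path is shortened here.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [[ $size != 0 ]] && return 0            # one non-empty block read back: device is live
        fi
        sleep 0.1
    done
    return 1
}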
16:32:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:33.435 { 00:17:33.435 "nbd_device": "/dev/nbd0", 00:17:33.435 "bdev_name": "Malloc0" 00:17:33.435 }, 00:17:33.435 { 00:17:33.435 "nbd_device": "/dev/nbd1", 00:17:33.435 "bdev_name": "Malloc1" 00:17:33.435 } 00:17:33.435 ]' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:33.435 { 00:17:33.435 "nbd_device": "/dev/nbd0", 00:17:33.435 "bdev_name": "Malloc0" 00:17:33.435 }, 00:17:33.435 { 00:17:33.435 "nbd_device": "/dev/nbd1", 00:17:33.435 "bdev_name": "Malloc1" 00:17:33.435 } 00:17:33.435 ]' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:33.435 /dev/nbd1' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:33.435 /dev/nbd1' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:33.435 256+0 records in 00:17:33.435 256+0 records out 00:17:33.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131791 s, 79.6 MB/s 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:33.435 256+0 records in 00:17:33.435 256+0 records out 00:17:33.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275224 s, 38.1 MB/s 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:33.435 256+0 records in 00:17:33.435 256+0 records out 00:17:33.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248632 s, 42.2 MB/s 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:33.435 16:32:09 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.435 16:32:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:33.693 16:32:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:33.694 16:32:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:33.953 16:32:10 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:33.953 16:32:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:34.211 16:32:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:34.212 16:32:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:34.212 16:32:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:34.781 16:32:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:36.157 [2024-10-17 16:32:12.080041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:36.157 [2024-10-17 16:32:12.195601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.157 [2024-10-17 16:32:12.195603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.157 [2024-10-17 16:32:12.391649] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:36.157 [2024-10-17 16:32:12.391751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:38.063 spdk_app_start Round 1 00:17:38.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:38.063 16:32:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:38.063 16:32:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:17:38.063 16:32:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59237 /var/tmp/spdk-nbd.sock 00:17:38.063 16:32:13 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59237 ']' 00:17:38.063 16:32:13 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:38.063 16:32:13 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.063 16:32:13 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
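# Condensing the Round 0 write/verify pass above into a sketch of
# nbd_dd_data_verify: "write" seeds a 1 MiB random file and mirrors it onto every
# NBD device with direct I/O; "verify" byte-compares each device against that file
# and removes it. That the harness's set -e turns a cmp mismatch into a test
# failure is an assumption; the non-zero exit itself is what the trace relies on.
nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2 i
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    if [[ $operation == write ]]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [[ $operation == verify ]]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"           # any differing byte -> non-zero exit
        done
        rm "$tmp_file"
    fi
}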
00:17:38.063 16:32:13 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.063 16:32:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:38.063 16:32:14 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.063 16:32:14 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:17:38.063 16:32:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:38.322 Malloc0 00:17:38.322 16:32:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:38.582 Malloc1 00:17:38.582 16:32:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.582 16:32:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:38.841 /dev/nbd0 00:17:38.841 16:32:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:38.841 16:32:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:38.841 1+0 records in 00:17:38.841 1+0 records out 
00:17:38.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646479 s, 6.3 MB/s 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:38.841 16:32:14 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:17:38.841 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.841 16:32:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.841 16:32:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:39.100 /dev/nbd1 00:17:39.100 16:32:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:39.100 16:32:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:39.100 1+0 records in 00:17:39.100 1+0 records out 00:17:39.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362263 s, 11.3 MB/s 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:39.100 16:32:15 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:17:39.100 16:32:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:39.100 16:32:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:39.100 16:32:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:39.100 16:32:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:39.100 16:32:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:39.359 { 00:17:39.359 "nbd_device": "/dev/nbd0", 00:17:39.359 "bdev_name": "Malloc0" 00:17:39.359 }, 00:17:39.359 { 00:17:39.359 "nbd_device": "/dev/nbd1", 00:17:39.359 "bdev_name": "Malloc1" 00:17:39.359 } 
00:17:39.359 ]' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:39.359 { 00:17:39.359 "nbd_device": "/dev/nbd0", 00:17:39.359 "bdev_name": "Malloc0" 00:17:39.359 }, 00:17:39.359 { 00:17:39.359 "nbd_device": "/dev/nbd1", 00:17:39.359 "bdev_name": "Malloc1" 00:17:39.359 } 00:17:39.359 ]' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:39.359 /dev/nbd1' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:39.359 /dev/nbd1' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:39.359 256+0 records in 00:17:39.359 256+0 records out 00:17:39.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00531983 s, 197 MB/s 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:39.359 256+0 records in 00:17:39.359 256+0 records out 00:17:39.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0292814 s, 35.8 MB/s 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:39.359 256+0 records in 00:17:39.359 256+0 records out 00:17:39.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295186 s, 35.5 MB/s 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:39.359 16:32:15 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.359 16:32:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.619 16:32:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:39.877 16:32:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:40.136 16:32:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:40.136 16:32:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:40.703 16:32:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:42.082 [2024-10-17 16:32:17.947612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:42.082 [2024-10-17 16:32:18.066082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.082 [2024-10-17 16:32:18.066103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.082 [2024-10-17 16:32:18.259068] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:42.082 [2024-10-17 16:32:18.259170] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:43.984 16:32:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:17:43.984 spdk_app_start Round 2 00:17:43.984 16:32:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:17:43.984 16:32:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59237 /var/tmp/spdk-nbd.sock 00:17:43.984 16:32:19 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59237 ']' 00:17:43.984 16:32:19 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:43.984 16:32:19 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:43.984 16:32:19 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:43.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
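# Sketch of the nbd_get_count check that brackets each round above: list the
# attached NBD disks over the RPC socket, extract the device paths with jq, and
# count them. The `|| true` guard matches the bare `true` visible in the xtrace
# once no disks remain and grep -c would otherwise exit non-zero.
nbd_get_count() {
    local rpc_server=$1 nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}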
00:17:43.984 16:32:19 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:43.984 16:32:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 16:32:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.984 16:32:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:17:43.984 16:32:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:44.295 Malloc0 00:17:44.295 16:32:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:44.295 Malloc1 00:17:44.553 16:32:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:44.553 /dev/nbd0 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:44.553 16:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:44.553 16:32:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:44.553 16:32:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:17:44.553 16:32:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:44.553 16:32:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:44.553 16:32:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:44.812 1+0 records in 00:17:44.812 1+0 records out 
00:17:44.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303667 s, 13.5 MB/s 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:44.812 16:32:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:17:44.812 16:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.812 16:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.812 16:32:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:44.812 /dev/nbd1 00:17:44.812 16:32:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:44.812 16:32:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:44.812 16:32:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:44.812 16:32:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:17:44.812 16:32:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:44.812 16:32:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:44.812 16:32:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:45.071 1+0 records in 00:17:45.071 1+0 records out 00:17:45.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388463 s, 10.5 MB/s 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:45.071 16:32:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:45.071 { 00:17:45.071 "nbd_device": "/dev/nbd0", 00:17:45.071 "bdev_name": "Malloc0" 00:17:45.071 }, 00:17:45.071 { 00:17:45.071 "nbd_device": "/dev/nbd1", 00:17:45.071 "bdev_name": "Malloc1" 00:17:45.071 } 
00:17:45.071 ]' 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:45.071 16:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:45.071 { 00:17:45.071 "nbd_device": "/dev/nbd0", 00:17:45.071 "bdev_name": "Malloc0" 00:17:45.071 }, 00:17:45.071 { 00:17:45.071 "nbd_device": "/dev/nbd1", 00:17:45.071 "bdev_name": "Malloc1" 00:17:45.071 } 00:17:45.071 ]' 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:45.330 /dev/nbd1' 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:45.330 /dev/nbd1' 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:45.330 256+0 records in 00:17:45.330 256+0 records out 00:17:45.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00733378 s, 143 MB/s 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:45.330 256+0 records in 00:17:45.330 256+0 records out 00:17:45.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268066 s, 39.1 MB/s 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:45.330 256+0 records in 00:17:45.330 256+0 records out 00:17:45.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308089 s, 34.0 MB/s 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:45.330 16:32:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.330 16:32:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:45.589 16:32:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:45.847 16:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:46.107 16:32:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:17:46.107 16:32:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:46.366 16:32:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:17:47.745 [2024-10-17 16:32:23.770155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:47.745 [2024-10-17 16:32:23.883822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.745 [2024-10-17 16:32:23.883827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.004 [2024-10-17 16:32:24.071540] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:48.004 [2024-10-17 16:32:24.071748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:49.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:49.383 16:32:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59237 /var/tmp/spdk-nbd.sock 00:17:49.383 16:32:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59237 ']' 00:17:49.383 16:32:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:49.383 16:32:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:49.383 16:32:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
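# Compressed shape of the whole app_repeat run as it plays out above (a sketch,
# not the literal event.sh source; rpc.py stands in for the full scripts/rpc.py
# path): three rounds of start/verify/SIGTERM against the same pid, then one final
# wait so the last respawn can be killed cleanly. The 64 MB / 4096-byte-block
# malloc sizes and the 3-second settle match the trace.
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc0
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # Malloc1
    nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3
done
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
killprocess "$repeat_pid"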
00:17:49.383 16:32:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:49.383 16:32:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:17:49.642 16:32:25 event.app_repeat -- event/event.sh@39 -- # killprocess 59237 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59237 ']' 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59237 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59237 00:17:49.642 killing process with pid 59237 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59237' 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59237 00:17:49.642 16:32:25 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59237 00:17:51.022 spdk_app_start is called in Round 0. 00:17:51.022 Shutdown signal received, stop current app iteration 00:17:51.022 Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 reinitialization... 00:17:51.022 spdk_app_start is called in Round 1. 00:17:51.022 Shutdown signal received, stop current app iteration 00:17:51.022 Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 reinitialization... 00:17:51.022 spdk_app_start is called in Round 2. 00:17:51.022 Shutdown signal received, stop current app iteration 00:17:51.022 Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 reinitialization... 00:17:51.022 spdk_app_start is called in Round 3. 00:17:51.022 Shutdown signal received, stop current app iteration 00:17:51.022 16:32:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:17:51.022 16:32:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:17:51.022 00:17:51.022 real 0m19.836s 00:17:51.022 user 0m42.432s 00:17:51.022 sys 0m3.186s 00:17:51.022 16:32:26 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.022 ************************************ 00:17:51.022 END TEST app_repeat 00:17:51.023 ************************************ 00:17:51.023 16:32:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:17:51.023 16:32:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:17:51.023 16:32:26 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:51.023 16:32:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:51.023 16:32:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.023 16:32:26 event -- common/autotest_common.sh@10 -- # set +x 00:17:51.023 ************************************ 00:17:51.023 START TEST cpu_locks 00:17:51.023 ************************************ 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:17:51.023 * Looking for test storage... 
00:17:51.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.023 16:32:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.023 --rc genhtml_branch_coverage=1 00:17:51.023 --rc genhtml_function_coverage=1 00:17:51.023 --rc genhtml_legend=1 00:17:51.023 --rc geninfo_all_blocks=1 00:17:51.023 --rc geninfo_unexecuted_blocks=1 00:17:51.023 00:17:51.023 ' 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.023 --rc genhtml_branch_coverage=1 00:17:51.023 --rc genhtml_function_coverage=1 
00:17:51.023 --rc genhtml_legend=1 00:17:51.023 --rc geninfo_all_blocks=1 00:17:51.023 --rc geninfo_unexecuted_blocks=1 00:17:51.023 00:17:51.023 ' 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.023 --rc genhtml_branch_coverage=1 00:17:51.023 --rc genhtml_function_coverage=1 00:17:51.023 --rc genhtml_legend=1 00:17:51.023 --rc geninfo_all_blocks=1 00:17:51.023 --rc geninfo_unexecuted_blocks=1 00:17:51.023 00:17:51.023 ' 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:51.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.023 --rc genhtml_branch_coverage=1 00:17:51.023 --rc genhtml_function_coverage=1 00:17:51.023 --rc genhtml_legend=1 00:17:51.023 --rc geninfo_all_blocks=1 00:17:51.023 --rc geninfo_unexecuted_blocks=1 00:17:51.023 00:17:51.023 ' 00:17:51.023 16:32:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:17:51.023 16:32:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:17:51.023 16:32:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:17:51.023 16:32:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.023 16:32:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:51.023 ************************************ 00:17:51.023 START TEST default_locks 00:17:51.023 ************************************ 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59696 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59696 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59696 ']' 00:17:51.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.023 16:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:51.283 [2024-10-17 16:32:27.369112] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
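
The block above is scripts/common.sh picking lcov flags by comparing the installed lcov version against 2 with a pure-bash comparator: each version string is split on '.', '-' and ':' into an array and the components are compared numerically, left to right. A minimal standalone sketch of that idea (the name ver_lt is ours, and plain numeric components are assumed):

    #!/usr/bin/env bash
    # ver_lt A B: succeed if version A sorts strictly before version B.
    # Split on '.', '-' and ':' as in the trace; missing components count
    # as 0, so 1.15 < 2 and 1.15 compares equal to 1.15.0.
    ver_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( 10#$x < 10#$y )) && return 0
            (( 10#$x > 10#$y )) && return 1
        done
        return 1   # equal is not "less than"
    }

    ver_lt 1.15 2 && echo "old lcov: pass --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

Because 1.15 sorts below 2, the harness exports the pre-2.0 --rc spellings of the lcov options, which is the LCOV_OPTS value logged above.
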
00:17:51.283 [2024-10-17 16:32:27.369417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59696 ] 00:17:51.283 [2024-10-17 16:32:27.543062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.542 [2024-10-17 16:32:27.667800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.478 16:32:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:52.478 16:32:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:17:52.478 16:32:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59696 00:17:52.478 16:32:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59696 00:17:52.478 16:32:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:52.738 16:32:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59696 00:17:52.738 16:32:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59696 ']' 00:17:52.738 16:32:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59696 00:17:52.738 16:32:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:17:52.738 16:32:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.738 16:32:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59696 00:17:52.738 killing process with pid 59696 00:17:52.738 16:32:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.738 16:32:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.738 16:32:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59696' 00:17:52.738 16:32:29 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59696 00:17:52.738 16:32:29 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59696 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59696 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59696 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59696 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59696 ']' 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.274 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:55.274 ERROR: process (pid: 59696) is no longer running 00:17:55.274 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59696) - No such process 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:17:55.274 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:55.275 00:17:55.275 real 0m4.167s 00:17:55.275 user 0m4.129s 00:17:55.275 sys 0m0.683s 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:55.275 ************************************ 00:17:55.275 END TEST default_locks 00:17:55.275 ************************************ 00:17:55.275 16:32:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:17:55.275 16:32:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:17:55.275 16:32:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:55.275 16:32:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:55.275 16:32:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:55.275 ************************************ 00:17:55.275 START TEST default_locks_via_rpc 00:17:55.275 ************************************ 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59771 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59771 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59771 ']' 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.275 16:32:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:55.533 [2024-10-17 16:32:31.615243] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:17:55.533 [2024-10-17 16:32:31.615603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:17:55.533 [2024-10-17 16:32:31.773688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.792 [2024-10-17 16:32:31.896282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59771 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59771 00:17:56.729 16:32:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:17:56.989 16:32:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59771 00:17:56.989 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59771 ']' 00:17:56.989 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59771 00:17:56.989 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:17:56.989 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.247 16:32:33 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59771 00:17:57.247 killing process with pid 59771 00:17:57.247 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.247 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.247 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59771' 00:17:57.247 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59771 00:17:57.247 16:32:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59771 00:17:59.781 00:17:59.781 real 0m4.199s 00:17:59.781 user 0m4.260s 00:17:59.781 sys 0m0.694s 00:17:59.781 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.781 ************************************ 00:17:59.781 END TEST default_locks_via_rpc 00:17:59.781 ************************************ 00:17:59.781 16:32:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 16:32:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:17:59.781 16:32:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:59.781 16:32:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.781 16:32:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 ************************************ 00:17:59.781 START TEST non_locking_app_on_locked_coremask 00:17:59.781 ************************************ 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59847 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59847 /var/tmp/spdk.sock 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59847 ']' 00:17:59.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.781 16:32:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:17:59.781 [2024-10-17 16:32:35.886473] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
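
Both default_locks tests assert that the running target actually holds its CPU-core lock. The trace's locks_exist helper is just two commands: lslocks(8) prints every lock the process owns, and grep looks for the spdk_cpu_lock files in that listing. A sketch with the same shape (the PID in the usage line is hypothetical):

    #!/usr/bin/env bash
    # locks_exist PID: succeed if PID holds at least one SPDK CPU-core lock,
    # i.e. a /var/tmp/spdk_cpu_lock_* entry shows up in its lslocks output.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Hypothetical usage against a freshly started target:
    # locks_exist 59771 && echo "pid 59771 holds its core lock"
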
00:17:59.781 [2024-10-17 16:32:35.886606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59847 ] 00:17:59.781 [2024-10-17 16:32:36.058984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.040 [2024-10-17 16:32:36.180852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59863 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59863 /var/tmp/spdk2.sock 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59863 ']' 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:00.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:00.980 16:32:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:00.980 [2024-10-17 16:32:37.170251] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:18:00.980 [2024-10-17 16:32:37.170527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59863 ] 00:18:01.238 [2024-10-17 16:32:37.339530] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
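
non_locking_app_on_locked_coremask starts a second spdk_tgt on the very core the first one has locked; it only comes up because it is passed --disable-cpumask-locks (hence the "CPU core locks deactivated." notice) and its own RPC socket via -r. A sketch of that launch pattern, with the binary path and socket names taken from the log; the real harness additionally polls each socket with waitforlisten before proceeding:

    #!/usr/bin/env bash
    # Two targets sharing core 0: the first claims the core lock, the
    # second opts out of locking and listens on a separate RPC socket.
    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$SPDK_TGT" -m 0x1 &                       # claims the core-0 lock file
    pid1=$!
    "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                    # shares core 0, takes no lock

    # ... exercise /var/tmp/spdk.sock and /var/tmp/spdk2.sock here ...
    kill "$pid1" "$pid2"; wait
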
00:18:01.238 [2024-10-17 16:32:37.339586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.496 [2024-10-17 16:32:37.577231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.034 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.034 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:04.034 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59847 00:18:04.034 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59847 00:18:04.034 16:32:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:04.293 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59847 00:18:04.293 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59847 ']' 00:18:04.294 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59847 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59847 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:04.553 killing process with pid 59847 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59847' 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59847 00:18:04.553 16:32:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59847 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59863 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59863 ']' 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59863 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59863 00:18:09.826 killing process with pid 59863 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59863' 00:18:09.826 16:32:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59863 00:18:09.826 16:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59863 00:18:11.730 ************************************ 00:18:11.730 END TEST non_locking_app_on_locked_coremask 00:18:11.730 ************************************ 00:18:11.730 00:18:11.730 real 0m12.023s 00:18:11.730 user 0m12.446s 00:18:11.730 sys 0m1.419s 00:18:11.730 16:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:11.730 16:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:11.730 16:32:47 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:18:11.730 16:32:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:11.731 16:32:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:11.731 16:32:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:11.731 ************************************ 00:18:11.731 START TEST locking_app_on_unlocked_coremask 00:18:11.731 ************************************ 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60021 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60021 /var/tmp/spdk.sock 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60021 ']' 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.731 16:32:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:11.731 [2024-10-17 16:32:47.977846] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:18:11.731 [2024-10-17 16:32:47.977973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60021 ] 00:18:11.990 [2024-10-17 16:32:48.149895] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:11.990 [2024-10-17 16:32:48.150218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.990 [2024-10-17 16:32:48.269872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60037 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60037 /var/tmp/spdk2.sock 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60037 ']' 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:12.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.928 16:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:13.187 [2024-10-17 16:32:49.255504] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
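
The lock being tested for is an advisory file lock, which is why lslocks can see it: judging by the spdk_cpu_lock_* names elsewhere in this run, the target holds one locked file per claimed core under /var/tmp. A rough emulation of a single per-core lock with flock(1), assuming (not confirmed by this log) that an exclusive non-blocking flock is the mechanism:

    #!/usr/bin/env bash
    # Hold the kind of per-core lock file the tests grep for: an exclusive,
    # non-blocking flock on /var/tmp/spdk_cpu_lock_<core>.
    core=000
    exec 9>"/var/tmp/spdk_cpu_lock_${core}"    # open (and create) the lock file
    if flock -xn 9; then
        echo "core ${core} claimed; the lock is held until fd 9 closes"
        sleep 30                               # stand-in for the app's lifetime
    else
        echo "core ${core} already claimed by another process" >&2
        exit 1
    fi
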
00:18:13.187 [2024-10-17 16:32:49.255958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:18:13.187 [2024-10-17 16:32:49.422971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.447 [2024-10-17 16:32:49.672855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.981 16:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:15.981 16:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:15.981 16:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60037 00:18:15.981 16:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60037 00:18:15.981 16:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60021 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60021 ']' 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60021 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60021 00:18:16.550 killing process with pid 60021 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60021' 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60021 00:18:16.550 16:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60021 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60037 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60037 ']' 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 60037 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60037 00:18:21.823 killing process with pid 60037 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:21.823 16:32:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60037' 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 60037 00:18:21.823 16:32:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 60037 00:18:23.745 00:18:23.745 real 0m12.026s 00:18:23.745 user 0m12.347s 00:18:23.745 sys 0m1.407s 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:23.745 ************************************ 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:23.745 END TEST locking_app_on_unlocked_coremask 00:18:23.745 ************************************ 00:18:23.745 16:32:59 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:18:23.745 16:32:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:23.745 16:32:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:23.745 16:32:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:23.745 ************************************ 00:18:23.745 START TEST locking_app_on_locked_coremask 00:18:23.745 ************************************ 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60191 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60191 /var/tmp/spdk.sock 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60191 ']' 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:23.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:23.745 16:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:24.004 [2024-10-17 16:33:00.080974] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
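
Every test tears its targets down through the same killprocess helper, whose trace repeats above: probe the PID with kill -0, resolve its name with ps (reactor_0 for an SPDK app), refuse to touch anything named sudo, then kill and reap it. A condensed restatement:

    #!/usr/bin/env bash
    # killprocess PID: the teardown pattern from the trace, minus the
    # Linux-vs-FreeBSD branch the real helper carries.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                 # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [[ $name == sudo ]] && return 1            # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap if it is our child
    }
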
00:18:24.004 [2024-10-17 16:33:00.081356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:18:24.004 [2024-10-17 16:33:00.252974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.263 [2024-10-17 16:33:00.371989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60212 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60212 /var/tmp/spdk2.sock 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60212 /var/tmp/spdk2.sock 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60212 /var/tmp/spdk2.sock 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60212 ']' 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:25.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.196 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:25.196 [2024-10-17 16:33:01.355613] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
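
locking_app_on_locked_coremask inverts the usual expectation: the second target (pid 60212) must fail to come up, so waitforlisten is wrapped in the harness's NOT helper, visible above as valid_exec_arg plus the es bookkeeping. The real helper also treats exit codes above 128 (signal deaths) specially; this sketch keeps only the inversion:

    #!/usr/bin/env bash
    # NOT CMD...: succeed only when CMD fails, for asserting expected failures.
    NOT() {
        if "$@"; then
            return 1        # command unexpectedly succeeded
        fi
        return 0            # failure observed, which is what the test wanted
    }

    NOT false && echo "ok: the expected failure happened"
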
00:18:25.196 [2024-10-17 16:33:01.355755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60212 ] 00:18:25.453 [2024-10-17 16:33:01.523676] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60191 has claimed it. 00:18:25.453 [2024-10-17 16:33:01.523755] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:25.711 ERROR: process (pid: 60212) is no longer running 00:18:25.711 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60212) - No such process 00:18:25.711 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60191 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60191 00:18:25.712 16:33:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60191 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60191 ']' 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60191 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60191 00:18:26.280 killing process with pid 60191 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60191' 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60191 00:18:26.280 16:33:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60191 00:18:28.888 00:18:28.888 real 0m4.959s 00:18:28.888 user 0m5.127s 00:18:28.888 sys 0m0.884s 00:18:28.888 16:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:28.888 16:33:04 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:18:28.888 ************************************ 00:18:28.888 END TEST locking_app_on_locked_coremask 00:18:28.888 ************************************ 00:18:28.888 16:33:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:18:28.888 16:33:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:28.888 16:33:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:28.888 16:33:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:28.888 ************************************ 00:18:28.888 START TEST locking_overlapped_coremask 00:18:28.888 ************************************ 00:18:28.888 16:33:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:18:28.888 16:33:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60282 00:18:28.888 16:33:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:28.888 16:33:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60282 /var/tmp/spdk.sock 00:18:28.888 16:33:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60282 ']' 00:18:28.888 16:33:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.888 16:33:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:28.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.888 16:33:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.888 16:33:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:28.888 16:33:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:28.888 [2024-10-17 16:33:05.107584] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
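
The overlapped-coremask test launches its first target with -m 0x7, a hex cpumask with bits 0-2 set, which is why three "Reactor started" notices for cores 0, 1 and 2 follow. A throwaway loop to expand any such mask into core numbers (illustration only, not a harness helper):

    #!/usr/bin/env bash
    # Expand a hex cpumask into core numbers: 0x7 -> 0 1 2.
    mask=0x7
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && printf '%d ' "$core"
    done
    echo
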
00:18:28.888 [2024-10-17 16:33:05.107724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60282 ] 00:18:29.145 [2024-10-17 16:33:05.280825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:29.145 [2024-10-17 16:33:05.405430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.145 [2024-10-17 16:33:05.405595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.145 [2024-10-17 16:33:05.405630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60300 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60300 /var/tmp/spdk2.sock 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60300 /var/tmp/spdk2.sock 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60300 /var/tmp/spdk2.sock 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60300 ']' 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.085 16:33:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:30.085 [2024-10-17 16:33:06.376978] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
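
The second instance asks for -m 0x1c, i.e. cores 2-4, so the two masks intersect in exactly one core. The bitwise AND predicts the core named in the error that follows:

    # 0x7 is binary 111 (cores 0-2), 0x1c is 11100 (cores 2-4):
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2, core 2
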
00:18:30.085 [2024-10-17 16:33:06.377103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60300 ] 00:18:30.349 [2024-10-17 16:33:06.545714] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60282 has claimed it. 00:18:30.349 [2024-10-17 16:33:06.545780] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:30.925 ERROR: process (pid: 60300) is no longer running 00:18:30.925 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60300) - No such process 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60282 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60282 ']' 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60282 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60282 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:30.925 killing process with pid 60282 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60282' 00:18:30.925 16:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60282 00:18:30.925 16:33:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60282 00:18:33.480 00:18:33.480 real 0m4.492s 00:18:33.480 user 0m12.132s 00:18:33.480 sys 0m0.652s 00:18:33.480 ************************************ 00:18:33.480 END TEST locking_overlapped_coremask 00:18:33.480 ************************************ 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:18:33.480 16:33:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:18:33.480 16:33:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:33.480 16:33:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:33.480 16:33:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:33.480 ************************************ 00:18:33.480 START TEST locking_overlapped_coremask_via_rpc 00:18:33.480 ************************************ 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60364 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60364 /var/tmp/spdk.sock 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60364 ']' 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.480 16:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:18:33.480 [2024-10-17 16:33:09.658940] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:18:33.480 [2024-10-17 16:33:09.659064] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60364 ] 00:18:33.739 [2024-10-17 16:33:09.831709] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
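
After the overlapping launch fails, check_remaining_locks (traced above) confirms that precisely the first target's three lock files survive: the glob of /var/tmp/spdk_cpu_lock_* must equal the brace expansion for cores 000-002. Restated as a self-contained check:

    #!/usr/bin/env bash
    # Exactly the lock files for cores 0-2 must exist - no more, no fewer.
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${expected[*]}" ]]
    }

    check_remaining_locks && echo "lock files for cores 0-2 present, none extra"
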
00:18:33.739 [2024-10-17 16:33:09.831771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:33.739 [2024-10-17 16:33:09.960778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.739 [2024-10-17 16:33:09.960883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.739 [2024-10-17 16:33:09.960911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60387 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60387 /var/tmp/spdk2.sock 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60387 ']' 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.672 16:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:34.672 [2024-10-17 16:33:10.955696] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:18:34.672 [2024-10-17 16:33:10.955827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60387 ] 00:18:34.930 [2024-10-17 16:33:11.124133] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:18:34.930 [2024-10-17 16:33:11.124185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.188 [2024-10-17 16:33:11.370959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.188 [2024-10-17 16:33:11.371102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.188 [2024-10-17 16:33:11.371138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:37.717 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.717 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:18:37.717 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:18:37.717 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.718 [2024-10-17 16:33:13.531946] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60364 has claimed it. 
00:18:37.718 request: 00:18:37.718 { 00:18:37.718 "method": "framework_enable_cpumask_locks", 00:18:37.718 "req_id": 1 00:18:37.718 } 00:18:37.718 Got JSON-RPC error response 00:18:37.718 response: 00:18:37.718 { 00:18:37.718 "code": -32603, 00:18:37.718 "message": "Failed to claim CPU core: 2" 00:18:37.718 } 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60364 /var/tmp/spdk.sock 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60364 ']' 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60387 /var/tmp/spdk2.sock 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60387 ']' 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:37.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
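That -32603 response is the point of the test: once the first target claims its cores with framework_enable_cpumask_locks, the second target cannot claim the shared core 2. Driven by hand, the sequence would look roughly like this (rpc.py and the -s socket flag are used the same way elsewhere in this log):

# Sketch only -- the method name, socket path, and lock-file names all appear in this log.
scripts/rpc.py framework_enable_cpumask_locks                         # first target: locks cores 0-2
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 already claimed
ls /var/tmp/spdk_cpu_lock_*   # the later check_remaining_locks expects _000, _001, _002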
00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:37.718 16:33:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:37.976 ************************************ 00:18:37.976 END TEST locking_overlapped_coremask_via_rpc 00:18:37.976 ************************************ 00:18:37.976 00:18:37.976 real 0m4.484s 00:18:37.976 user 0m1.309s 00:18:37.976 sys 0m0.248s 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:37.976 16:33:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:37.976 16:33:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:18:37.976 16:33:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60364 ]] 00:18:37.976 16:33:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60364 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60364 ']' 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60364 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60364 00:18:37.976 killing process with pid 60364 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60364' 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60364 00:18:37.976 16:33:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60364 00:18:40.503 16:33:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60387 ]] 00:18:40.503 16:33:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60387 00:18:40.503 16:33:16 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60387 ']' 00:18:40.503 16:33:16 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60387 00:18:40.503 16:33:16 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:18:40.503 16:33:16 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:40.504 
16:33:16 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60387 00:18:40.504 killing process with pid 60387 00:18:40.504 16:33:16 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:40.504 16:33:16 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:40.504 16:33:16 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60387' 00:18:40.504 16:33:16 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60387 00:18:40.504 16:33:16 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60387 00:18:43.031 16:33:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:43.031 16:33:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:18:43.031 16:33:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60364 ]] 00:18:43.031 16:33:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60364 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60364 ']' 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60364 00:18:43.031 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60364) - No such process 00:18:43.031 Process with pid 60364 is not found 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60364 is not found' 00:18:43.031 16:33:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60387 ]] 00:18:43.031 16:33:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60387 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60387 ']' 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60387 00:18:43.031 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60387) - No such process 00:18:43.031 Process with pid 60387 is not found 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60387 is not found' 00:18:43.031 16:33:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:18:43.031 00:18:43.031 real 0m52.037s 00:18:43.031 user 1m28.367s 00:18:43.031 sys 0m7.239s 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.031 ************************************ 00:18:43.031 END TEST cpu_locks 00:18:43.031 ************************************ 00:18:43.031 16:33:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:18:43.031 00:18:43.031 real 1m24.304s 00:18:43.031 user 2m32.606s 00:18:43.031 sys 0m11.752s 00:18:43.031 16:33:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.031 16:33:19 event -- common/autotest_common.sh@10 -- # set +x 00:18:43.031 ************************************ 00:18:43.031 END TEST event 00:18:43.031 ************************************ 00:18:43.031 16:33:19 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:43.031 16:33:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:43.031 16:33:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.031 16:33:19 -- common/autotest_common.sh@10 -- # set +x 00:18:43.031 ************************************ 00:18:43.031 START TEST thread 00:18:43.031 ************************************ 00:18:43.031 16:33:19 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:43.031 * Looking for test storage... 
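Each suite from here on (thread, app_cmdline, version, blockdev) opens with the same storage probe and an lcov version check: scripts/common.sh reads `lcov --version`, then cmp_versions splits each version string on IFS=.-: and compares the fields numerically, padding the shorter list with zeros. A standalone sketch of that field-wise comparison; vercmp_lt is a name invented here, not a helper from the repo, and the repo's cmp_versions additionally takes an operator argument:

# Sketch of the field-wise comparison used to pick lcov options.
vercmp_lt() {  # returns 0 (true) when version $1 sorts strictly before version $2
  local IFS='.-:'
  local -a a=($1) b=($2)
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0   # earliest differing field decides
    (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
  done
  return 1  # all fields equal
}
vercmp_lt 1.15 2 && echo "lcov older than 2: keep the --rc option spelling shown below"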
00:18:43.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:18:43.031 16:33:19 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:43.031 16:33:19 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:18:43.031 16:33:19 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:43.289 16:33:19 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.289 16:33:19 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.289 16:33:19 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.289 16:33:19 thread -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.289 16:33:19 thread -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.289 16:33:19 thread -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.289 16:33:19 thread -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.289 16:33:19 thread -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.289 16:33:19 thread -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.289 16:33:19 thread -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.289 16:33:19 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.289 16:33:19 thread -- scripts/common.sh@344 -- # case "$op" in 00:18:43.289 16:33:19 thread -- scripts/common.sh@345 -- # : 1 00:18:43.289 16:33:19 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.289 16:33:19 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:43.289 16:33:19 thread -- scripts/common.sh@365 -- # decimal 1 00:18:43.289 16:33:19 thread -- scripts/common.sh@353 -- # local d=1 00:18:43.289 16:33:19 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.289 16:33:19 thread -- scripts/common.sh@355 -- # echo 1 00:18:43.289 16:33:19 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.289 16:33:19 thread -- scripts/common.sh@366 -- # decimal 2 00:18:43.289 16:33:19 thread -- scripts/common.sh@353 -- # local d=2 00:18:43.289 16:33:19 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.289 16:33:19 thread -- scripts/common.sh@355 -- # echo 2 00:18:43.289 16:33:19 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.289 16:33:19 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.289 16:33:19 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.289 16:33:19 thread -- scripts/common.sh@368 -- # return 0 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:43.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.289 --rc genhtml_branch_coverage=1 00:18:43.289 --rc genhtml_function_coverage=1 00:18:43.289 --rc genhtml_legend=1 00:18:43.289 --rc geninfo_all_blocks=1 00:18:43.289 --rc geninfo_unexecuted_blocks=1 00:18:43.289 00:18:43.289 ' 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:43.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.289 --rc genhtml_branch_coverage=1 00:18:43.289 --rc genhtml_function_coverage=1 00:18:43.289 --rc genhtml_legend=1 00:18:43.289 --rc geninfo_all_blocks=1 00:18:43.289 --rc geninfo_unexecuted_blocks=1 00:18:43.289 00:18:43.289 ' 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:43.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:18:43.289 --rc genhtml_branch_coverage=1 00:18:43.289 --rc genhtml_function_coverage=1 00:18:43.289 --rc genhtml_legend=1 00:18:43.289 --rc geninfo_all_blocks=1 00:18:43.289 --rc geninfo_unexecuted_blocks=1 00:18:43.289 00:18:43.289 ' 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:43.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.289 --rc genhtml_branch_coverage=1 00:18:43.289 --rc genhtml_function_coverage=1 00:18:43.289 --rc genhtml_legend=1 00:18:43.289 --rc geninfo_all_blocks=1 00:18:43.289 --rc geninfo_unexecuted_blocks=1 00:18:43.289 00:18:43.289 ' 00:18:43.289 16:33:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.289 16:33:19 thread -- common/autotest_common.sh@10 -- # set +x 00:18:43.289 ************************************ 00:18:43.289 START TEST thread_poller_perf 00:18:43.289 ************************************ 00:18:43.289 16:33:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:43.289 [2024-10-17 16:33:19.462617] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:18:43.289 [2024-10-17 16:33:19.462742] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60587 ] 00:18:43.546 [2024-10-17 16:33:19.632396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.546 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:18:43.546 [2024-10-17 16:33:19.748272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.918 [2024-10-17T16:33:21.217Z] ====================================== 00:18:44.918 [2024-10-17T16:33:21.217Z] busy:2498492156 (cyc) 00:18:44.918 [2024-10-17T16:33:21.217Z] total_run_count: 387000 00:18:44.918 [2024-10-17T16:33:21.217Z] tsc_hz: 2490000000 (cyc) 00:18:44.918 [2024-10-17T16:33:21.217Z] ====================================== 00:18:44.918 [2024-10-17T16:33:21.217Z] poller_cost: 6456 (cyc), 2592 (nsec) 00:18:44.918 00:18:44.918 real 0m1.571s 00:18:44.918 user 0m1.354s 00:18:44.918 sys 0m0.109s 00:18:44.918 16:33:20 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.918 16:33:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:44.918 ************************************ 00:18:44.918 END TEST thread_poller_perf 00:18:44.918 ************************************ 00:18:44.918 16:33:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:44.918 16:33:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:18:44.918 16:33:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:44.918 16:33:21 thread -- common/autotest_common.sh@10 -- # set +x 00:18:44.918 ************************************ 00:18:44.918 START TEST thread_poller_perf 00:18:44.918 ************************************ 00:18:44.918 16:33:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:44.918 [2024-10-17 16:33:21.105655] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:18:44.918 [2024-10-17 16:33:21.105983] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60619 ] 00:18:45.177 [2024-10-17 16:33:21.276373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.177 Running 1000 pollers for 1 seconds with 0 microseconds period. 
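poller_cost in each summary is simply the busy cycle count divided by total_run_count, converted to nanoseconds through tsc_hz. Re-deriving the first run's numbers with shell arithmetic:

# Values copied from the first poller_perf summary above (-l 1, i.e. 1 us period).
echo $(( 2498492156 / 387000 ))             # -> 6456 cycles per poller execution
echo $(( 6456 * 1000000000 / 2490000000 ))  # -> 2592 ns at tsc_hz = 2490000000

The second run (period 0, results below) uses busy pollers instead of 1 us timed pollers, which is why it completes ~13x more iterations and reports a far lower per-call cost of 490 cycles / 196 ns.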
00:18:45.177 [2024-10-17 16:33:21.394978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.552 [2024-10-17T16:33:22.851Z] ====================================== 00:18:46.552 [2024-10-17T16:33:22.851Z] busy:2493694302 (cyc) 00:18:46.552 [2024-10-17T16:33:22.851Z] total_run_count: 5079000 00:18:46.552 [2024-10-17T16:33:22.851Z] tsc_hz: 2490000000 (cyc) 00:18:46.552 [2024-10-17T16:33:22.851Z] ====================================== 00:18:46.552 [2024-10-17T16:33:22.851Z] poller_cost: 490 (cyc), 196 (nsec) 00:18:46.552 00:18:46.552 real 0m1.559s 00:18:46.552 user 0m1.342s 00:18:46.552 sys 0m0.110s 00:18:46.552 16:33:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.552 ************************************ 00:18:46.552 END TEST thread_poller_perf 00:18:46.552 ************************************ 00:18:46.552 16:33:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:18:46.552 16:33:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:18:46.552 ************************************ 00:18:46.552 END TEST thread 00:18:46.552 ************************************ 00:18:46.552 00:18:46.552 real 0m3.504s 00:18:46.552 user 0m2.864s 00:18:46.552 sys 0m0.432s 00:18:46.552 16:33:22 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:46.552 16:33:22 thread -- common/autotest_common.sh@10 -- # set +x 00:18:46.552 16:33:22 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:18:46.552 16:33:22 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:46.552 16:33:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:46.552 16:33:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:46.552 16:33:22 -- common/autotest_common.sh@10 -- # set +x 00:18:46.552 ************************************ 00:18:46.552 START TEST app_cmdline 00:18:46.552 ************************************ 00:18:46.552 16:33:22 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:18:46.812 * Looking for test storage... 
00:18:46.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@345 -- # : 1 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.812 16:33:22 app_cmdline -- scripts/common.sh@368 -- # return 0 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.812 --rc genhtml_branch_coverage=1 00:18:46.812 --rc genhtml_function_coverage=1 00:18:46.812 --rc genhtml_legend=1 00:18:46.812 --rc geninfo_all_blocks=1 00:18:46.812 --rc geninfo_unexecuted_blocks=1 00:18:46.812 00:18:46.812 ' 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.812 --rc genhtml_branch_coverage=1 00:18:46.812 --rc genhtml_function_coverage=1 00:18:46.812 --rc genhtml_legend=1 00:18:46.812 --rc geninfo_all_blocks=1 00:18:46.812 --rc geninfo_unexecuted_blocks=1 00:18:46.812 
00:18:46.812 ' 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.812 --rc genhtml_branch_coverage=1 00:18:46.812 --rc genhtml_function_coverage=1 00:18:46.812 --rc genhtml_legend=1 00:18:46.812 --rc geninfo_all_blocks=1 00:18:46.812 --rc geninfo_unexecuted_blocks=1 00:18:46.812 00:18:46.812 ' 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:46.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.812 --rc genhtml_branch_coverage=1 00:18:46.812 --rc genhtml_function_coverage=1 00:18:46.812 --rc genhtml_legend=1 00:18:46.812 --rc geninfo_all_blocks=1 00:18:46.812 --rc geninfo_unexecuted_blocks=1 00:18:46.812 00:18:46.812 ' 00:18:46.812 16:33:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:18:46.812 16:33:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60708 00:18:46.812 16:33:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:18:46.812 16:33:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60708 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60708 ']' 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:46.812 16:33:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:46.812 [2024-10-17 16:33:23.081839] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:18:46.812 [2024-10-17 16:33:23.082166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60708 ] 00:18:47.070 [2024-10-17 16:33:23.249771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.070 [2024-10-17 16:33:23.364748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.007 16:33:24 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.007 16:33:24 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:18:48.007 16:33:24 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:18:48.266 { 00:18:48.266 "version": "SPDK v25.01-pre git sha1 c1dd46fc6", 00:18:48.266 "fields": { 00:18:48.266 "major": 25, 00:18:48.266 "minor": 1, 00:18:48.266 "patch": 0, 00:18:48.266 "suffix": "-pre", 00:18:48.267 "commit": "c1dd46fc6" 00:18:48.267 } 00:18:48.267 } 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@26 -- # sort 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:18:48.267 16:33:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:48.267 16:33:24 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:18:48.525 request: 00:18:48.525 { 00:18:48.525 "method": "env_dpdk_get_mem_stats", 00:18:48.525 "req_id": 1 00:18:48.525 } 00:18:48.525 Got JSON-RPC error response 00:18:48.525 response: 00:18:48.525 { 00:18:48.525 "code": -32601, 00:18:48.525 "message": "Method not found" 00:18:48.525 } 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:48.525 16:33:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60708 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60708 ']' 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60708 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60708 00:18:48.525 killing process with pid 60708 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60708' 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@969 -- # kill 60708 00:18:48.525 16:33:24 app_cmdline -- common/autotest_common.sh@974 -- # wait 60708 00:18:51.062 00:18:51.062 real 0m4.433s 00:18:51.062 user 0m4.582s 00:18:51.062 sys 0m0.687s 00:18:51.062 16:33:27 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.062 16:33:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:18:51.062 ************************************ 00:18:51.062 END TEST app_cmdline 00:18:51.062 ************************************ 00:18:51.062 16:33:27 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:51.062 16:33:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:51.062 16:33:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.062 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:18:51.062 ************************************ 00:18:51.062 START TEST version 00:18:51.062 ************************************ 00:18:51.062 16:33:27 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:18:51.322 * Looking for test storage... 
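For the cmdline test that just finished: the target was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so rpc_get_methods reports exactly those two methods and every other call is refused before dispatch. Reproduced by hand, with all names and flags as captured above:

# Sketch only -- the whitelist, both methods, and the error code appear in the log above.
build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py rpc_get_methods           # lists only the two allowed methods
scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
scripts/rpc.py env_dpdk_get_mem_stats    # refused: -32601 "Method not found"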
00:18:51.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:18:51.322 16:33:27 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:51.322 16:33:27 version -- common/autotest_common.sh@1691 -- # lcov --version 00:18:51.322 16:33:27 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:51.322 16:33:27 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:51.322 16:33:27 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.322 16:33:27 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.322 16:33:27 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.322 16:33:27 version -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.322 16:33:27 version -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.322 16:33:27 version -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.322 16:33:27 version -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.322 16:33:27 version -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.322 16:33:27 version -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.322 16:33:27 version -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.322 16:33:27 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.322 16:33:27 version -- scripts/common.sh@344 -- # case "$op" in 00:18:51.322 16:33:27 version -- scripts/common.sh@345 -- # : 1 00:18:51.322 16:33:27 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.322 16:33:27 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:51.322 16:33:27 version -- scripts/common.sh@365 -- # decimal 1 00:18:51.322 16:33:27 version -- scripts/common.sh@353 -- # local d=1 00:18:51.322 16:33:27 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.322 16:33:27 version -- scripts/common.sh@355 -- # echo 1 00:18:51.322 16:33:27 version -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.322 16:33:27 version -- scripts/common.sh@366 -- # decimal 2 00:18:51.322 16:33:27 version -- scripts/common.sh@353 -- # local d=2 00:18:51.322 16:33:27 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.322 16:33:27 version -- scripts/common.sh@355 -- # echo 2 00:18:51.322 16:33:27 version -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.322 16:33:27 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.322 16:33:27 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.322 16:33:27 version -- scripts/common.sh@368 -- # return 0 00:18:51.322 16:33:27 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.322 16:33:27 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:51.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.322 --rc genhtml_branch_coverage=1 00:18:51.322 --rc genhtml_function_coverage=1 00:18:51.322 --rc genhtml_legend=1 00:18:51.322 --rc geninfo_all_blocks=1 00:18:51.322 --rc geninfo_unexecuted_blocks=1 00:18:51.322 00:18:51.322 ' 00:18:51.322 16:33:27 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:51.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.322 --rc genhtml_branch_coverage=1 00:18:51.322 --rc genhtml_function_coverage=1 00:18:51.323 --rc genhtml_legend=1 00:18:51.323 --rc geninfo_all_blocks=1 00:18:51.323 --rc geninfo_unexecuted_blocks=1 00:18:51.323 00:18:51.323 ' 00:18:51.323 16:33:27 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:51.323 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:51.323 --rc genhtml_branch_coverage=1 00:18:51.323 --rc genhtml_function_coverage=1 00:18:51.323 --rc genhtml_legend=1 00:18:51.323 --rc geninfo_all_blocks=1 00:18:51.323 --rc geninfo_unexecuted_blocks=1 00:18:51.323 00:18:51.323 ' 00:18:51.323 16:33:27 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:51.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.323 --rc genhtml_branch_coverage=1 00:18:51.323 --rc genhtml_function_coverage=1 00:18:51.323 --rc genhtml_legend=1 00:18:51.323 --rc geninfo_all_blocks=1 00:18:51.323 --rc geninfo_unexecuted_blocks=1 00:18:51.323 00:18:51.323 ' 00:18:51.323 16:33:27 version -- app/version.sh@17 -- # get_header_version major 00:18:51.323 16:33:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # cut -f2 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # tr -d '"' 00:18:51.323 16:33:27 version -- app/version.sh@17 -- # major=25 00:18:51.323 16:33:27 version -- app/version.sh@18 -- # get_header_version minor 00:18:51.323 16:33:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # tr -d '"' 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # cut -f2 00:18:51.323 16:33:27 version -- app/version.sh@18 -- # minor=1 00:18:51.323 16:33:27 version -- app/version.sh@19 -- # get_header_version patch 00:18:51.323 16:33:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # cut -f2 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # tr -d '"' 00:18:51.323 16:33:27 version -- app/version.sh@19 -- # patch=0 00:18:51.323 16:33:27 version -- app/version.sh@20 -- # get_header_version suffix 00:18:51.323 16:33:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # cut -f2 00:18:51.323 16:33:27 version -- app/version.sh@14 -- # tr -d '"' 00:18:51.323 16:33:27 version -- app/version.sh@20 -- # suffix=-pre 00:18:51.323 16:33:27 version -- app/version.sh@22 -- # version=25.1 00:18:51.323 16:33:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:18:51.323 16:33:27 version -- app/version.sh@28 -- # version=25.1rc0 00:18:51.323 16:33:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:18:51.323 16:33:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:18:51.323 16:33:27 version -- app/version.sh@30 -- # py_version=25.1rc0 00:18:51.323 16:33:27 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:18:51.323 ************************************ 00:18:51.323 END TEST version 00:18:51.323 ************************************ 00:18:51.323 00:18:51.323 real 0m0.318s 00:18:51.323 user 0m0.191s 00:18:51.323 sys 0m0.186s 00:18:51.323 16:33:27 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.323 16:33:27 version -- common/autotest_common.sh@10 -- # set +x 00:18:51.583 16:33:27 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:18:51.583 16:33:27 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:18:51.583 16:33:27 -- spdk/autotest.sh@194 -- # uname -s 00:18:51.583 16:33:27 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:51.583 16:33:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:51.583 16:33:27 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:51.583 16:33:27 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:18:51.583 16:33:27 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:51.583 16:33:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:51.583 16:33:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.583 16:33:27 -- common/autotest_common.sh@10 -- # set +x 00:18:51.583 ************************************ 00:18:51.583 START TEST blockdev_nvme 00:18:51.583 ************************************ 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:51.583 * Looking for test storage... 00:18:51.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:51.583 16:33:27 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.583 --rc genhtml_branch_coverage=1 00:18:51.583 --rc genhtml_function_coverage=1 00:18:51.583 --rc genhtml_legend=1 00:18:51.583 --rc geninfo_all_blocks=1 00:18:51.583 --rc geninfo_unexecuted_blocks=1 00:18:51.583 00:18:51.583 ' 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.583 --rc genhtml_branch_coverage=1 00:18:51.583 --rc genhtml_function_coverage=1 00:18:51.583 --rc genhtml_legend=1 00:18:51.583 --rc geninfo_all_blocks=1 00:18:51.583 --rc geninfo_unexecuted_blocks=1 00:18:51.583 00:18:51.583 ' 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.583 --rc genhtml_branch_coverage=1 00:18:51.583 --rc genhtml_function_coverage=1 00:18:51.583 --rc genhtml_legend=1 00:18:51.583 --rc geninfo_all_blocks=1 00:18:51.583 --rc geninfo_unexecuted_blocks=1 00:18:51.583 00:18:51.583 ' 00:18:51.583 16:33:27 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:51.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:51.583 --rc genhtml_branch_coverage=1 00:18:51.583 --rc genhtml_function_coverage=1 00:18:51.583 --rc genhtml_legend=1 00:18:51.583 --rc geninfo_all_blocks=1 00:18:51.583 --rc geninfo_unexecuted_blocks=1 00:18:51.583 00:18:51.583 ' 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:51.583 16:33:27 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:51.583 16:33:27 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:18:51.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60902 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60902 00:18:51.843 16:33:27 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60902 ']' 00:18:51.843 16:33:27 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.843 16:33:27 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.843 16:33:27 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.843 16:33:27 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.843 16:33:27 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:51.843 16:33:27 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:51.843 [2024-10-17 16:33:27.996845] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:18:51.843 [2024-10-17 16:33:27.996979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60902 ] 00:18:52.155 [2024-10-17 16:33:28.169554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.155 [2024-10-17 16:33:28.290517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.095 16:33:29 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.095 16:33:29 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:18:53.095 16:33:29 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:53.095 16:33:29 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:18:53.095 16:33:29 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:18:53.095 16:33:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:53.095 16:33:29 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:53.095 16:33:29 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:18:53.095 16:33:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.095 16:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.354 16:33:29 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.354 16:33:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:18:53.354 16:33:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.354 16:33:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.354 16:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.614 16:33:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.614 16:33:29 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:53.614 16:33:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.614 16:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.614 16:33:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.614 16:33:29 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:53.614 16:33:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:53.614 16:33:29 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.614 16:33:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.614 16:33:29 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:53.614 16:33:29 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.614 16:33:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:53.614 16:33:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:53.614 16:33:29 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ed416989-5e0c-42b7-8832-100eb5043749"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ed416989-5e0c-42b7-8832-100eb5043749",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "1fc4d44f-b5f0-451f-8e7a-08cfd90fe39c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1fc4d44f-b5f0-451f-8e7a-08cfd90fe39c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "aa764fc0-82bb-4eb1-a735-ee094630c323"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aa764fc0-82bb-4eb1-a735-ee094630c323",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "6516aed2-c157-433f-a1f2-c53028f3fc53"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6516aed2-c157-433f-a1f2-c53028f3fc53",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "2979492e-aaeb-4771-82b8-79ff0e36cb12"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "2979492e-aaeb-4771-82b8-79ff0e36cb12",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "01879b1a-f6e3-4388-9129-fe493f47bd9e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "01879b1a-f6e3-4388-9129-fe493f47bd9e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:53.615 16:33:29 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:53.615 16:33:29 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:18:53.615 16:33:29 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:53.615 16:33:29 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60902 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60902 ']' 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60902 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:18:53.615 16:33:29 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60902 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:53.615 killing process with pid 60902 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60902' 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60902 00:18:53.615 16:33:29 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60902 00:18:56.219 16:33:32 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:56.219 16:33:32 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:56.219 16:33:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:56.219 16:33:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:56.219 16:33:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:56.219 ************************************ 00:18:56.219 START TEST bdev_hello_world 00:18:56.219 ************************************ 00:18:56.219 16:33:32 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:56.219 [2024-10-17 16:33:32.368895] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:18:56.219 [2024-10-17 16:33:32.369021] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60997 ] 00:18:56.478 [2024-10-17 16:33:32.541771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.478 [2024-10-17 16:33:32.662384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.047 [2024-10-17 16:33:33.319804] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:57.047 [2024-10-17 16:33:33.319859] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:57.047 [2024-10-17 16:33:33.319883] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:57.047 [2024-10-17 16:33:33.322943] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:57.047 [2024-10-17 16:33:33.323534] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:57.047 [2024-10-17 16:33:33.323572] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:57.047 [2024-10-17 16:33:33.323964] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
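The hello_world run above exercises the hello_bdev example end to end: open Nvme0n1, get an I/O channel, write a buffer, read it back, and compare. A minimal manual equivalent, reusing the binary and JSON config shown in the log (a sketch; paths assume the repo checkout above):

    cd /home/vagrant/spdk_repo/spdk
    # -b selects which bdev from bdev.json the example opens
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # Expected NOTICE trail: open bdev, open io channel, write, then read back "Hello World!"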
00:18:57.047 00:18:57.047 [2024-10-17 16:33:33.324004] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:58.427 00:18:58.427 real 0m2.173s 00:18:58.427 user 0m1.795s 00:18:58.427 sys 0m0.271s 00:18:58.427 16:33:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.427 16:33:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 ************************************ 00:18:58.427 END TEST bdev_hello_world 00:18:58.427 ************************************ 00:18:58.427 16:33:34 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:58.427 16:33:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:58.427 16:33:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.427 16:33:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 ************************************ 00:18:58.427 START TEST bdev_bounds 00:18:58.427 ************************************ 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61039 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:58.427 Process bdevio pid: 61039 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61039' 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61039 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61039 ']' 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:58.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:58.427 16:33:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:58.427 [2024-10-17 16:33:34.617595] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
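bdevio is launched with -w, so it loads the same NVMe config and then parks on /var/tmp/spdk.sock until told to run; the harness then fires the whole suite over RPC. A manual equivalent of that handshake (a sketch; flags copied from the log invocation):

    # Start bdevio in wait mode against the generated config
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # After waitforlisten succeeds, trigger every registered test
    ./test/bdev/bdevio/tests.py perform_tests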
00:18:58.427 [2024-10-17 16:33:34.617727] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:18:58.690 [2024-10-17 16:33:34.790343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:58.690 [2024-10-17 16:33:34.910812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.690 [2024-10-17 16:33:34.910916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.690 [2024-10-17 16:33:34.910942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.651 16:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:59.651 16:33:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:18:59.651 16:33:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:59.651 I/O targets: 00:18:59.651 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:59.651 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:59.651 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:59.651 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:59.651 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:59.651 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:59.651 00:18:59.651 00:18:59.651 CUnit - A unit testing framework for C - Version 2.1-3 00:18:59.651 http://cunit.sourceforge.net/ 00:18:59.651 00:18:59.651 00:18:59.651 Suite: bdevio tests on: Nvme3n1 00:18:59.651 Test: blockdev write read block ...passed 00:18:59.651 Test: blockdev write zeroes read block ...passed 00:18:59.651 Test: blockdev write zeroes read no split ...passed 00:18:59.651 Test: blockdev write zeroes read split ...passed 00:18:59.651 Test: blockdev write zeroes read split partial ...passed 00:18:59.651 Test: blockdev reset ...[2024-10-17 16:33:35.771186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:18:59.651 [2024-10-17 16:33:35.775303] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:59.651 passed 00:18:59.651 Test: blockdev write read 8 blocks ...passed 00:18:59.651 Test: blockdev write read size > 128k ...passed 00:18:59.651 Test: blockdev write read invalid size ...passed 00:18:59.651 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.652 Test: blockdev write read max offset ...passed 00:18:59.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.652 Test: blockdev writev readv 8 blocks ...passed 00:18:59.652 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.652 Test: blockdev writev readv block ...passed 00:18:59.652 Test: blockdev writev readv size > 128k ...passed 00:18:59.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.652 Test: blockdev comparev and writev ...[2024-10-17 16:33:35.784423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bd80a000 len:0x1000 00:18:59.652 [2024-10-17 16:33:35.784473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:59.652 passed 00:18:59.652 Test: blockdev nvme passthru rw ...passed 00:18:59.652 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:33:35.785464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:59.652 [2024-10-17 16:33:35.785502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:59.652 passed 00:18:59.652 Test: blockdev nvme admin passthru ...passed 00:18:59.652 Test: blockdev copy ...passed 00:18:59.652 Suite: bdevio tests on: Nvme2n3 00:18:59.652 Test: blockdev write read block ...passed 00:18:59.652 Test: blockdev write zeroes read block ...passed 00:18:59.652 Test: blockdev write zeroes read no split ...passed 00:18:59.652 Test: blockdev write zeroes read split ...passed 00:18:59.652 Test: blockdev write zeroes read split partial ...passed 00:18:59.652 Test: blockdev reset ...[2024-10-17 16:33:35.871675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:59.652 [2024-10-17 16:33:35.875932] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
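Note which controller each reset NOTICE names: Nvme3n1 resets 0000:00:13.0, while the Nvme2n3, Nvme2n2, and Nvme2n1 suites that follow all reset the same controller at 0000:00:12.0, because those three bdevs are namespaces 3, 2, and 1 of a single QEMU NVMe device (serial 12342 in the bdev dump above). The namespace-to-controller mapping can be read straight out of the RPC output (a sketch; the jq path mirrors the dump structure):

    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name) -> \(.driver_specific.nvme[0].pci_address)"'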
00:18:59.652 passed 00:18:59.652 Test: blockdev write read 8 blocks ...passed 00:18:59.652 Test: blockdev write read size > 128k ...passed 00:18:59.652 Test: blockdev write read invalid size ...passed 00:18:59.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.652 Test: blockdev write read max offset ...passed 00:18:59.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.652 Test: blockdev writev readv 8 blocks ...passed 00:18:59.652 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.652 Test: blockdev writev readv block ...passed 00:18:59.652 Test: blockdev writev readv size > 128k ...passed 00:18:59.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.652 Test: blockdev comparev and writev ...[2024-10-17 16:33:35.884978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a1206000 len:0x1000 00:18:59.652 [2024-10-17 16:33:35.885028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:59.652 passed 00:18:59.652 Test: blockdev nvme passthru rw ...passed 00:18:59.652 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:33:35.886005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:59.652 [2024-10-17 16:33:35.886040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:59.652 passed 00:18:59.652 Test: blockdev nvme admin passthru ...passed 00:18:59.652 Test: blockdev copy ...passed 00:18:59.652 Suite: bdevio tests on: Nvme2n2 00:18:59.652 Test: blockdev write read block ...passed 00:18:59.652 Test: blockdev write zeroes read block ...passed 00:18:59.652 Test: blockdev write zeroes read no split ...passed 00:18:59.652 Test: blockdev write zeroes read split ...passed 00:18:59.923 Test: blockdev write zeroes read split partial ...passed 00:18:59.923 Test: blockdev reset ...[2024-10-17 16:33:35.975520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:59.923 [2024-10-17 16:33:35.979905] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:59.923 passed 00:18:59.923 Test: blockdev write read 8 blocks ...passed 00:18:59.923 Test: blockdev write read size > 128k ...passed 00:18:59.923 Test: blockdev write read invalid size ...passed 00:18:59.923 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.923 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.923 Test: blockdev write read max offset ...passed 00:18:59.923 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.923 Test: blockdev writev readv 8 blocks ...passed 00:18:59.923 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.923 Test: blockdev writev readv block ...passed 00:18:59.923 Test: blockdev writev readv size > 128k ...passed 00:18:59.923 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.923 Test: blockdev comparev and writev ...[2024-10-17 16:33:35.989194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d903c000 len:0x1000 00:18:59.923 [2024-10-17 16:33:35.989241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:59.923 passed 00:18:59.923 Test: blockdev nvme passthru rw ...passed 00:18:59.923 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:33:35.990135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:59.923 [2024-10-17 16:33:35.990167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:59.923 passed 00:18:59.923 Test: blockdev nvme admin passthru ...passed 00:18:59.923 Test: blockdev copy ...passed 00:18:59.923 Suite: bdevio tests on: Nvme2n1 00:18:59.923 Test: blockdev write read block ...passed 00:18:59.923 Test: blockdev write zeroes read block ...passed 00:18:59.923 Test: blockdev write zeroes read no split ...passed 00:18:59.923 Test: blockdev write zeroes read split ...passed 00:18:59.923 Test: blockdev write zeroes read split partial ...passed 00:18:59.923 Test: blockdev reset ...[2024-10-17 16:33:36.076270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:18:59.923 [2024-10-17 16:33:36.080368] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:59.923 passed 00:18:59.923 Test: blockdev write read 8 blocks ...passed 00:18:59.924 Test: blockdev write read size > 128k ...passed 00:18:59.924 Test: blockdev write read invalid size ...passed 00:18:59.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.924 Test: blockdev write read max offset ...passed 00:18:59.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.924 Test: blockdev writev readv 8 blocks ...passed 00:18:59.924 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.924 Test: blockdev writev readv block ...passed 00:18:59.924 Test: blockdev writev readv size > 128k ...passed 00:18:59.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.924 Test: blockdev comparev and writev ...[2024-10-17 16:33:36.089946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9038000 len:0x1000 00:18:59.924 [2024-10-17 16:33:36.089998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:59.924 passed 00:18:59.924 Test: blockdev nvme passthru rw ...passed 00:18:59.924 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:33:36.091018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:59.924 [2024-10-17 16:33:36.091053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:59.924 passed 00:18:59.924 Test: blockdev nvme admin passthru ...passed 00:18:59.924 Test: blockdev copy ...passed 00:18:59.924 Suite: bdevio tests on: Nvme1n1 00:18:59.924 Test: blockdev write read block ...passed 00:18:59.924 Test: blockdev write zeroes read block ...passed 00:18:59.924 Test: blockdev write zeroes read no split ...passed 00:18:59.924 Test: blockdev write zeroes read split ...passed 00:18:59.924 Test: blockdev write zeroes read split partial ...passed 00:18:59.924 Test: blockdev reset ...[2024-10-17 16:33:36.178235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:18:59.924 passed 00:18:59.924 Test: blockdev write read 8 blocks ...[2024-10-17 16:33:36.182028] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:59.924 passed 00:18:59.924 Test: blockdev write read size > 128k ...passed 00:18:59.924 Test: blockdev write read invalid size ...passed 00:18:59.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:59.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:59.924 Test: blockdev write read max offset ...passed 00:18:59.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:59.924 Test: blockdev writev readv 8 blocks ...passed 00:18:59.924 Test: blockdev writev readv 30 x 1block ...passed 00:18:59.924 Test: blockdev writev readv block ...passed 00:18:59.924 Test: blockdev writev readv size > 128k ...passed 00:18:59.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:59.924 Test: blockdev comparev and writev ...[2024-10-17 16:33:36.191553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9034000 len:0x1000 00:18:59.924 [2024-10-17 16:33:36.191606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:59.924 passed 00:18:59.924 Test: blockdev nvme passthru rw ...passed 00:18:59.924 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:33:36.192645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:59.924 [2024-10-17 16:33:36.192679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:59.924 passed 00:18:59.924 Test: blockdev nvme admin passthru ...passed 00:18:59.924 Test: blockdev copy ...passed 00:18:59.924 Suite: bdevio tests on: Nvme0n1 00:18:59.924 Test: blockdev write read block ...passed 00:18:59.924 Test: blockdev write zeroes read block ...passed 00:18:59.924 Test: blockdev write zeroes read no split ...passed 00:19:00.195 Test: blockdev write zeroes read split ...passed 00:19:00.195 Test: blockdev write zeroes read split partial ...passed 00:19:00.195 Test: blockdev reset ...[2024-10-17 16:33:36.298020] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:00.195 passed 00:19:00.195 Test: blockdev write read 8 blocks ...[2024-10-17 16:33:36.301970] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:00.195 passed 00:19:00.195 Test: blockdev write read size > 128k ...passed 00:19:00.195 Test: blockdev write read invalid size ...passed 00:19:00.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:00.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:00.195 Test: blockdev write read max offset ...passed 00:19:00.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:00.195 Test: blockdev writev readv 8 blocks ...passed 00:19:00.195 Test: blockdev writev readv 30 x 1block ...passed 00:19:00.195 Test: blockdev writev readv block ...passed 00:19:00.195 Test: blockdev writev readv size > 128k ...passed 00:19:00.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:00.195 Test: blockdev comparev and writev ...passed 00:19:00.195 Test: blockdev nvme passthru rw ...[2024-10-17 16:33:36.309758] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:19:00.195 separate metadata which is not supported yet. 
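Two completions in the suites above deserve decoding. First, the COMPARE FAILURE (02/85) entries under each "comparev and writev" test appear to be expected output rather than errors: status 02/85 is NVMe Compare Failure (SCT 02h, SC 85h), and the NOTICE log level plus the "passed" verdict indicate the suite deliberately drives a miscompare to exercise that path. Second, comparev is skipped on Nvme0n1 because it is the one bdev with separate, non-interleaved metadata (md_size 64, md_interleave false in the earlier dump). A quick check for which bdev that is (a sketch; the jq expression is illustrative):

    scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name) md_size=\(.md_size // 0) interleaved=\(.md_interleave // false)"'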
00:19:00.195 passed 00:19:00.195 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:33:36.310365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:19:00.195 [2024-10-17 16:33:36.310411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:19:00.195 passed 00:19:00.195 Test: blockdev nvme admin passthru ...passed 00:19:00.195 Test: blockdev copy ...passed 00:19:00.195 00:19:00.195 Run Summary: Type Total Ran Passed Failed Inactive 00:19:00.195 suites 6 6 n/a 0 0 00:19:00.195 tests 138 138 138 0 0 00:19:00.195 asserts 893 893 893 0 n/a 00:19:00.195 00:19:00.195 Elapsed time = 1.693 seconds 00:19:00.195 0 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61039 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61039 ']' 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61039 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61039 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:00.195 killing process with pid 61039 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61039' 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61039 00:19:00.195 16:33:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61039 00:19:01.132 16:33:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:01.132 00:19:01.132 real 0m2.877s 00:19:01.132 user 0m7.354s 00:19:01.132 sys 0m0.412s 00:19:01.132 16:33:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:01.132 16:33:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:01.132 ************************************ 00:19:01.132 END TEST bdev_bounds 00:19:01.132 ************************************ 00:19:01.423 16:33:37 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:01.423 16:33:37 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:01.423 16:33:37 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.423 16:33:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:01.423 ************************************ 00:19:01.423 START TEST bdev_nbd 00:19:01.423 ************************************ 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61104 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61104 /var/tmp/spdk-nbd.sock 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61104 ']' 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.423 16:33:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:01.423 [2024-10-17 16:33:37.575765] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
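The nbd stage that follows exports each bdev as a kernel /dev/nbdX node via the bdev_svc app listening on /var/tmp/spdk-nbd.sock, and each waitfornbd check proves the mapping with a single direct-I/O 4 KiB read through the device. The per-device loop reduces to (a sketch; same RPCs, socket, and dd flags as in the log, with a scratch output file standing in for the harness's nbdtest file):

    sock=/var/tmp/spdk-nbd.sock
    # Export the bdev over NBD, read one block through the kernel device, then unmap it
    scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0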
00:19:01.423 [2024-10-17 16:33:37.575889] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.682 [2024-10-17 16:33:37.750929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.682 [2024-10-17 16:33:37.869001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:02.619 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.620 1+0 records in 
00:19:02.620 1+0 records out 00:19:02.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000554786 s, 7.4 MB/s 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:02.620 16:33:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.879 1+0 records in 00:19:02.879 1+0 records out 00:19:02.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000749993 s, 5.5 MB/s 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:02.879 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.138 1+0 records in 00:19:03.138 1+0 records out 00:19:03.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065384 s, 6.3 MB/s 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.138 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.400 1+0 records in 00:19:03.400 1+0 records out 00:19:03.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765648 s, 5.3 MB/s 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.400 16:33:39 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:03.400 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.659 1+0 records in 00:19:03.659 1+0 records out 00:19:03.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733883 s, 5.6 MB/s 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:03.659 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.924 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:03.924 16:33:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:03.924 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:03.924 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:03.924 16:33:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:19:03.924 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:03.925 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:03.925 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:03.925 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:03.925 1+0 records in 00:19:03.925 1+0 records out 00:19:03.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798698 s, 5.1 MB/s 00:19:03.925 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:03.925 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:03.925 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd0", 00:19:04.183 "bdev_name": "Nvme0n1" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd1", 00:19:04.183 "bdev_name": "Nvme1n1" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd2", 00:19:04.183 "bdev_name": "Nvme2n1" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd3", 00:19:04.183 "bdev_name": "Nvme2n2" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd4", 00:19:04.183 "bdev_name": "Nvme2n3" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd5", 00:19:04.183 "bdev_name": "Nvme3n1" 00:19:04.183 } 00:19:04.183 ]' 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd0", 00:19:04.183 "bdev_name": "Nvme0n1" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd1", 00:19:04.183 "bdev_name": "Nvme1n1" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd2", 00:19:04.183 "bdev_name": "Nvme2n1" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd3", 00:19:04.183 "bdev_name": "Nvme2n2" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd4", 00:19:04.183 "bdev_name": "Nvme2n3" 00:19:04.183 }, 00:19:04.183 { 00:19:04.183 "nbd_device": "/dev/nbd5", 00:19:04.183 "bdev_name": "Nvme3n1" 00:19:04.183 } 00:19:04.183 ]' 00:19:04.183 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.442 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:04.700 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:04.700 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:04.700 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:04.700 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.700 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.700 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:04.701 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:04.701 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:04.701 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.701 16:33:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:04.959 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.218 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:05.477 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.736 16:33:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:05.736 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:05.736 16:33:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:05.736 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:05.996 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:19:05.996 /dev/nbd0 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:06.256 
16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.256 1+0 records in 00:19:06.256 1+0 records out 00:19:06.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642638 s, 6.4 MB/s 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:06.256 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:19:06.515 /dev/nbd1 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.515 1+0 records in 00:19:06.515 1+0 records out 00:19:06.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058446 s, 7.0 MB/s 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 
-- # return 0 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:06.515 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:19:06.834 /dev/nbd10 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:06.834 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.835 1+0 records in 00:19:06.835 1+0 records out 00:19:06.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466127 s, 8.8 MB/s 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:06.835 16:33:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:19:07.093 /dev/nbd11 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.093 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.093 1+0 records in 00:19:07.093 1+0 records out 00:19:07.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000884702 s, 4.6 MB/s 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:07.094 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:19:07.094 /dev/nbd12 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.353 1+0 records in 00:19:07.353 1+0 records out 00:19:07.353 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567986 s, 7.2 MB/s 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:07.353 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:19:07.353 /dev/nbd13 00:19:07.613 16:33:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.613 1+0 records in 00:19:07.613 1+0 records out 00:19:07.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654623 s, 6.3 MB/s 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd0", 00:19:07.613 "bdev_name": "Nvme0n1" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd1", 00:19:07.613 "bdev_name": "Nvme1n1" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd10", 00:19:07.613 "bdev_name": "Nvme2n1" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd11", 00:19:07.613 "bdev_name": "Nvme2n2" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd12", 00:19:07.613 "bdev_name": "Nvme2n3" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd13", 00:19:07.613 "bdev_name": "Nvme3n1" 00:19:07.613 } 00:19:07.613 ]' 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd0", 00:19:07.613 "bdev_name": "Nvme0n1" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd1", 00:19:07.613 "bdev_name": "Nvme1n1" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd10", 00:19:07.613 "bdev_name": "Nvme2n1" 00:19:07.613 }, 00:19:07.613 
{ 00:19:07.613 "nbd_device": "/dev/nbd11", 00:19:07.613 "bdev_name": "Nvme2n2" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd12", 00:19:07.613 "bdev_name": "Nvme2n3" 00:19:07.613 }, 00:19:07.613 { 00:19:07.613 "nbd_device": "/dev/nbd13", 00:19:07.613 "bdev_name": "Nvme3n1" 00:19:07.613 } 00:19:07.613 ]' 00:19:07.613 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:07.872 /dev/nbd1 00:19:07.872 /dev/nbd10 00:19:07.872 /dev/nbd11 00:19:07.872 /dev/nbd12 00:19:07.872 /dev/nbd13' 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:07.872 /dev/nbd1 00:19:07.872 /dev/nbd10 00:19:07.872 /dev/nbd11 00:19:07.872 /dev/nbd12 00:19:07.872 /dev/nbd13' 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:07.872 256+0 records in 00:19:07.872 256+0 records out 00:19:07.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120885 s, 86.7 MB/s 00:19:07.872 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:07.873 16:33:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:07.873 256+0 records in 00:19:07.873 256+0 records out 00:19:07.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124091 s, 8.5 MB/s 00:19:07.873 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:07.873 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:08.131 256+0 records in 00:19:08.131 256+0 records out 00:19:08.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126085 s, 8.3 MB/s 00:19:08.131 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:08.131 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:08.131 256+0 records in 00:19:08.131 256+0 records out 00:19:08.131 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.127756 s, 8.2 MB/s 00:19:08.131 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:08.131 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:08.390 256+0 records in 00:19:08.390 256+0 records out 00:19:08.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129306 s, 8.1 MB/s 00:19:08.390 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:08.390 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:08.390 256+0 records in 00:19:08.390 256+0 records out 00:19:08.390 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124587 s, 8.4 MB/s 00:19:08.390 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:08.390 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:08.647 256+0 records in 00:19:08.647 256+0 records out 00:19:08.647 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129386 s, 8.1 MB/s 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:08.647 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.648 16:33:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.906 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.165 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.424 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.683 16:33:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.943 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.202 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:10.202 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:10.202 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.202 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:10.202 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:10.202 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:10.461 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:10.721 malloc_lvol_verify 00:19:10.721 16:33:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:10.721 95002b3b-b7f1-433d-8a3e-0dfa4f17adc4 00:19:10.981 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:10.981 e2916f8d-64c0-41dd-b0d7-0024c80f8c42 00:19:10.981 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:11.244 /dev/nbd0 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:11.244 mke2fs 1.47.0 (5-Feb-2023) 00:19:11.244 Discarding device blocks: 0/4096 done 00:19:11.244 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:11.244 00:19:11.244 Allocating group tables: 0/1 done 00:19:11.244 Writing inode tables: 0/1 done 00:19:11.244 Creating journal (1024 blocks): done 00:19:11.244 Writing superblocks and filesystem accounting information: 0/1 done 00:19:11.244 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:11.244 16:33:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.244 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61104 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61104 ']' 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61104 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61104 00:19:11.503 killing process with pid 61104 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61104' 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61104 00:19:11.503 16:33:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61104 00:19:12.902 16:33:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:12.902 00:19:12.902 real 0m11.486s 00:19:12.902 user 0m14.971s 00:19:12.902 sys 0m4.700s 00:19:12.902 16:33:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:12.902 ************************************ 00:19:12.902 END TEST bdev_nbd 00:19:12.902 ************************************ 00:19:12.902 16:33:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:12.902 skipping fio tests on NVMe due to multi-ns failures. 00:19:12.902 16:33:49 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:19:12.902 16:33:49 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:19:12.902 16:33:49 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
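The nbd_with_lvol_verify step traced above boils down to a handful of RPCs against the spdk-nbd target. A minimal sketch of the same flow, assuming a target is already listening on the socket (every command, path, and size is taken from the trace; only the two variable names are added here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # 16 MiB malloc bdev with 512-byte blocks backs the lvolstore
  $rpc -s $sock bdev_malloc_create -b malloc_lvol_verify 16 512
  $rpc -s $sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc -s $sock bdev_lvol_create lvol 4 -l lvs    # 4 MiB volume -> the "4096 1k blocks" mkfs reports
  $rpc -s $sock nbd_start_disk lvs/lvol /dev/nbd0
  # the test only proceeds once the kernel has picked up a non-zero capacity
  [[ -e /sys/block/nbd0/size && $(cat /sys/block/nbd0/size) -ne 0 ]]
  mkfs.ext4 /dev/nbd0
  $rpc -s $sock nbd_stop_disk /dev/nbd0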
00:19:12.902 16:33:49 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:12.902 16:33:49 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:12.902 16:33:49 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:12.902 16:33:49 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:12.902 16:33:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:12.902 ************************************ 00:19:12.902 START TEST bdev_verify 00:19:12.902 ************************************ 00:19:12.902 16:33:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:12.902 [2024-10-17 16:33:49.123398] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:19:12.902 [2024-10-17 16:33:49.123746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61490 ] 00:19:13.162 [2024-10-17 16:33:49.295454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:13.162 [2024-10-17 16:33:49.422222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.162 [2024-10-17 16:33:49.422252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.141 Running I/O for 5 seconds... 00:19:16.014 21312.00 IOPS, 83.25 MiB/s [2024-10-17T16:33:53.694Z] 22368.00 IOPS, 87.38 MiB/s [2024-10-17T16:33:54.630Z] 21973.33 IOPS, 85.83 MiB/s [2024-10-17T16:33:55.567Z] 22048.00 IOPS, 86.12 MiB/s [2024-10-17T16:33:55.567Z] 21427.20 IOPS, 83.70 MiB/s 00:19:19.268 Latency(us) 00:19:19.268 [2024-10-17T16:33:55.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.268 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x0 length 0xbd0bd 00:19:19.268 Nvme0n1 : 5.04 1751.79 6.84 0.00 0.00 72766.20 15054.86 91381.92 00:19:19.268 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:19.268 Nvme0n1 : 5.07 1768.96 6.91 0.00 0.00 72182.64 16002.36 92224.15 00:19:19.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x0 length 0xa0000 00:19:19.268 Nvme1n1 : 5.07 1754.85 6.85 0.00 0.00 72498.96 8738.13 83801.86 00:19:19.268 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0xa0000 length 0xa0000 00:19:19.268 Nvme1n1 : 5.07 1768.44 6.91 0.00 0.00 72066.33 16528.76 91381.92 00:19:19.268 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x0 length 0x80000 00:19:19.268 Nvme2n1 : 5.09 1761.71 6.88 0.00 0.00 72143.11 12896.64 71589.53 00:19:19.268 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x80000 length 0x80000 00:19:19.268 Nvme2n1 : 5.07 1767.96 6.91 0.00 0.00 71839.82 16212.92 91803.04 00:19:19.268 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x0 length 0x80000 00:19:19.268 Nvme2n2 : 5.09 1761.24 6.88 0.00 0.00 72001.78 12949.28 69483.95 00:19:19.268 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x80000 length 0x80000 00:19:19.268 Nvme2n2 : 5.07 1767.43 6.90 0.00 0.00 71699.32 15160.13 90118.58 00:19:19.268 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x0 length 0x80000 00:19:19.268 Nvme2n3 : 5.09 1760.77 6.88 0.00 0.00 71878.43 13159.84 71589.53 00:19:19.268 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x80000 length 0x80000 00:19:19.268 Nvme2n3 : 5.07 1766.54 6.90 0.00 0.00 71581.13 14739.02 90960.81 00:19:19.268 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x0 length 0x20000 00:19:19.268 Nvme3n1 : 5.09 1760.32 6.88 0.00 0.00 71717.34 12475.53 70326.18 00:19:19.268 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:19.268 Verification LBA range: start 0x20000 length 0x20000 00:19:19.268 Nvme3n1 : 5.08 1775.61 6.94 0.00 0.00 71067.31 4711.22 88434.12 00:19:19.268 [2024-10-17T16:33:55.567Z] =================================================================================================================== 00:19:19.268 [2024-10-17T16:33:55.567Z] Total : 21165.62 82.68 0.00 0.00 71951.71 4711.22 92224.15 00:19:20.643 00:19:20.643 real 0m7.745s 00:19:20.643 user 0m14.312s 00:19:20.643 sys 0m0.312s 00:19:20.643 16:33:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:20.643 ************************************ 00:19:20.643 16:33:56 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:20.643 END TEST bdev_verify 00:19:20.643 ************************************ 00:19:20.643 16:33:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:20.643 16:33:56 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:19:20.643 16:33:56 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:20.643 16:33:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.643 ************************************ 00:19:20.643 START TEST bdev_verify_big_io 00:19:20.643 ************************************ 00:19:20.643 16:33:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:20.901 [2024-10-17 16:33:56.941108] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
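The MiB/s column in these bdevperf tables is just the IOPS average times the IO size: the verify run above used -o 4096, so each IO moves 4 KiB and MiB/s = IOPS / 256; the big-IO run starting here uses -o 65536, so the same check divides by 16. A quick sanity check against the 5-second verify total (number copied from the table above):

  awk 'BEGIN { printf "%.2f MiB/s\n", 21427.20 * 4096 / (1024*1024) }'
  # -> 83.70, matching the logged "21427.20 IOPS, 83.70 MiB/s"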
00:19:20.902 [2024-10-17 16:33:56.941257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61599 ] 00:19:20.902 [2024-10-17 16:33:57.115619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:21.158 [2024-10-17 16:33:57.239819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.158 [2024-10-17 16:33:57.239851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.092 Running I/O for 5 seconds... 00:19:25.374 1527.00 IOPS, 95.44 MiB/s [2024-10-17T16:34:02.627Z] 2094.50 IOPS, 130.91 MiB/s [2024-10-17T16:34:03.567Z] 2313.00 IOPS, 144.56 MiB/s [2024-10-17T16:34:03.826Z] 2196.00 IOPS, 137.25 MiB/s [2024-10-17T16:34:04.085Z] 2347.20 IOPS, 146.70 MiB/s 00:19:27.786 Latency(us) 00:19:27.786 [2024-10-17T16:34:04.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.786 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0x0 length 0xbd0b 00:19:27.786 Nvme0n1 : 5.54 180.36 11.27 0.00 0.00 682018.49 13265.12 1165645.93 00:19:27.786 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0xbd0b length 0xbd0b 00:19:27.786 Nvme0n1 : 5.60 181.81 11.36 0.00 0.00 692647.62 15791.81 923083.77 00:19:27.786 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0x0 length 0xa000 00:19:27.786 Nvme1n1 : 5.44 180.10 11.26 0.00 0.00 668497.06 85907.43 1030889.18 00:19:27.786 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0xa000 length 0xa000 00:19:27.786 Nvme1n1 : 5.61 178.84 11.18 0.00 0.00 685947.25 16844.59 936559.45 00:19:27.786 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0x0 length 0x8000 00:19:27.786 Nvme2n1 : 5.66 185.24 11.58 0.00 0.00 620129.17 34531.42 1266713.50 00:19:27.786 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0x8000 length 0x8000 00:19:27.786 Nvme2n1 : 5.61 178.09 11.13 0.00 0.00 677656.85 18002.66 963510.80 00:19:27.786 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0x0 length 0x8000 00:19:27.786 Nvme2n2 : 5.71 198.57 12.41 0.00 0.00 567851.39 25582.73 1286927.01 00:19:27.786 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:27.786 Verification LBA range: start 0x8000 length 0x8000 00:19:27.787 Nvme2n2 : 5.61 178.76 11.17 0.00 0.00 664958.68 20634.63 751268.91 00:19:27.787 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:27.787 Verification LBA range: start 0x0 length 0x8000 00:19:27.787 Nvme2n3 : 5.83 228.14 14.26 0.00 0.00 474454.50 8632.85 1300402.69 00:19:27.787 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:27.787 Verification LBA range: start 0x8000 length 0x8000 00:19:27.787 Nvme2n3 : 5.61 178.02 11.13 0.00 0.00 656622.66 21266.30 1003937.82 00:19:27.787 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:27.787 Verification LBA range: start 0x0 length 0x2000 00:19:27.787 Nvme3n1 : 6.01 332.87 
20.80 0.00 0.00 313660.38 806.04 1347567.55 00:19:27.787 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:27.787 Verification LBA range: start 0x2000 length 0x2000 00:19:27.787 Nvme3n1 : 5.61 178.16 11.14 0.00 0.00 645443.72 13001.92 1010675.66 00:19:27.787 [2024-10-17T16:34:04.086Z] =================================================================================================================== 00:19:27.787 [2024-10-17T16:34:04.086Z] Total : 2378.95 148.68 0.00 0.00 586605.46 806.04 1347567.55 00:19:30.324 ************************************ 00:19:30.324 END TEST bdev_verify_big_io 00:19:30.324 ************************************ 00:19:30.324 00:19:30.324 real 0m9.336s 00:19:30.324 user 0m17.439s 00:19:30.324 sys 0m0.343s 00:19:30.324 16:34:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.324 16:34:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:30.324 16:34:06 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:30.324 16:34:06 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:30.324 16:34:06 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.324 16:34:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:30.324 ************************************ 00:19:30.324 START TEST bdev_write_zeroes 00:19:30.324 ************************************ 00:19:30.324 16:34:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:30.324 [2024-10-17 16:34:06.353036] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:19:30.324 [2024-10-17 16:34:06.353159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61714 ] 00:19:30.324 [2024-10-17 16:34:06.524752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.584 [2024-10-17 16:34:06.679523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.151 Running I/O for 1 seconds... 
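Note the core setup changing between suites: the two verify runs were launched across two cores (-m 0x3, reactors started on cores 0 and 1), which is why each Nvme device appears twice in their tables — one job with Core Mask 0x1 and one with 0x2 — while this write_zeroes run is pinned to a single core (-c 0x1). The mask is simply a hex bitmap of core IDs; a tiny helper to build one, added here purely for illustration (it is not part of the SPDK scripts):

  cores_to_mask() { local m=0 c; for c in "$@"; do (( m |= 1 << c )); done; printf '0x%x\n' "$m"; }
  cores_to_mask 0 1    # -> 0x3, the mask passed to the verify runs above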
00:19:32.345 72576.00 IOPS, 283.50 MiB/s 00:19:32.345 Latency(us) 00:19:32.345 [2024-10-17T16:34:08.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.345 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:32.345 Nvme0n1 : 1.02 12036.07 47.02 0.00 0.00 10607.76 8369.66 24845.78 00:19:32.345 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:32.345 Nvme1n1 : 1.02 12023.57 46.97 0.00 0.00 10605.28 8685.49 25056.33 00:19:32.345 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:32.345 Nvme2n1 : 1.02 12011.45 46.92 0.00 0.00 10580.25 8685.49 23792.99 00:19:32.345 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:32.345 Nvme2n2 : 1.02 12000.59 46.88 0.00 0.00 10522.94 8580.22 19055.45 00:19:32.345 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:32.345 Nvme2n3 : 1.03 12044.74 47.05 0.00 0.00 10479.86 5895.61 17792.10 00:19:32.345 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:32.345 Nvme3n1 : 1.03 12032.92 47.00 0.00 0.00 10461.93 6000.89 19266.00 00:19:32.345 [2024-10-17T16:34:08.644Z] =================================================================================================================== 00:19:32.345 [2024-10-17T16:34:08.644Z] Total : 72149.33 281.83 0.00 0.00 10542.88 5895.61 25056.33 00:19:33.737 00:19:33.737 real 0m3.374s 00:19:33.737 user 0m2.964s 00:19:33.737 sys 0m0.290s 00:19:33.737 16:34:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.737 ************************************ 00:19:33.737 END TEST bdev_write_zeroes 00:19:33.737 ************************************ 00:19:33.737 16:34:09 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:33.737 16:34:09 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.737 16:34:09 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:33.737 16:34:09 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.737 16:34:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:33.737 ************************************ 00:19:33.737 START TEST bdev_json_nonenclosed 00:19:33.737 ************************************ 00:19:33.737 16:34:09 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:33.737 [2024-10-17 16:34:09.800914] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
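One more consistency check worth knowing about these tables: the Total row of the write_zeroes results above is the sum of the six per-device averages, to within rounding of the printed rows (numbers copied from the log):

  awk 'BEGIN { printf "%.2f IOPS\n", 12036.07+12023.57+12011.45+12000.59+12044.74+12032.92 }'
  # -> 72149.34 IOPS, vs the reported total of 72149.33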
00:19:33.737 [2024-10-17 16:34:09.801232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61772 ] 00:19:33.737 [2024-10-17 16:34:09.976868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.995 [2024-10-17 16:34:10.104765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.995 [2024-10-17 16:34:10.104856] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:33.995 [2024-10-17 16:34:10.104877] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:33.995 [2024-10-17 16:34:10.104891] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:34.253 00:19:34.253 real 0m0.685s 00:19:34.253 user 0m0.430s 00:19:34.253 sys 0m0.149s 00:19:34.253 ************************************ 00:19:34.253 END TEST bdev_json_nonenclosed 00:19:34.253 ************************************ 00:19:34.253 16:34:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.253 16:34:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:34.253 16:34:10 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:34.253 16:34:10 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:19:34.253 16:34:10 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.253 16:34:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:34.253 ************************************ 00:19:34.253 START TEST bdev_json_nonarray 00:19:34.253 ************************************ 00:19:34.253 16:34:10 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:34.512 [2024-10-17 16:34:10.558628] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:19:34.512 [2024-10-17 16:34:10.559086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61798 ] 00:19:34.512 [2024-10-17 16:34:10.749404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.770 [2024-10-17 16:34:10.882128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.770 [2024-10-17 16:34:10.882244] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
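The nonenclosed.json and nonarray.json fixtures driving these two tests are not printed in the log, but the error strings pin down what they must contain. For orientation, a hypothetical pair of inputs (not the actual fixture contents) next to the valid shape — bdevperf's --json loader expects a single top-level object whose "subsystems" key is an array:

  # valid:        { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
  # nonenclosed:  top level is not an object   -> "Invalid JSON configuration: not enclosed in {}."
  # nonarray:     "subsystems" is not an array -> "Invalid JSON configuration: 'subsystems' should be an array."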
00:19:34.770 [2024-10-17 16:34:10.882267] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:34.770 [2024-10-17 16:34:10.882279] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:35.028
00:19:35.028 real 0m0.716s
00:19:35.028 user 0m0.452s
00:19:35.028 sys 0m0.157s
00:19:35.028 16:34:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:35.028 ************************************
00:19:35.028 END TEST bdev_json_nonarray
00:19:35.028 ************************************
00:19:35.028 16:34:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:19:35.028 16:34:11 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:19:35.028
00:19:35.028 real 0m43.600s
00:19:35.028 user 1m4.478s
00:19:35.028 sys 0m7.809s
00:19:35.028 16:34:11 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:35.028 16:34:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:19:35.028 ************************************
00:19:35.028 END TEST blockdev_nvme
00:19:35.028 ************************************
00:19:35.028 16:34:11 -- spdk/autotest.sh@209 -- # uname -s
00:19:35.028 16:34:11 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:19:35.028 16:34:11 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:19:35.028 16:34:11 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:35.028 16:34:11 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:35.028 16:34:11 -- common/autotest_common.sh@10 -- # set +x
00:19:35.028 ************************************
00:19:35.028 START TEST blockdev_nvme_gpt
00:19:35.028 ************************************
00:19:35.028 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:19:35.286 * Looking for test storage...
00:19:35.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:35.286 16:34:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.286 --rc genhtml_branch_coverage=1 00:19:35.286 --rc genhtml_function_coverage=1 00:19:35.286 --rc genhtml_legend=1 00:19:35.286 --rc geninfo_all_blocks=1 00:19:35.286 --rc geninfo_unexecuted_blocks=1 00:19:35.286 00:19:35.286 ' 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.286 --rc 
genhtml_branch_coverage=1 00:19:35.286 --rc genhtml_function_coverage=1 00:19:35.286 --rc genhtml_legend=1 00:19:35.286 --rc geninfo_all_blocks=1 00:19:35.286 --rc geninfo_unexecuted_blocks=1 00:19:35.286 00:19:35.286 ' 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.286 --rc genhtml_branch_coverage=1 00:19:35.286 --rc genhtml_function_coverage=1 00:19:35.286 --rc genhtml_legend=1 00:19:35.286 --rc geninfo_all_blocks=1 00:19:35.286 --rc geninfo_unexecuted_blocks=1 00:19:35.286 00:19:35.286 ' 00:19:35.286 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:35.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.286 --rc genhtml_branch_coverage=1 00:19:35.286 --rc genhtml_function_coverage=1 00:19:35.286 --rc genhtml_legend=1 00:19:35.286 --rc geninfo_all_blocks=1 00:19:35.286 --rc geninfo_unexecuted_blocks=1 00:19:35.286 00:19:35.286 ' 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:35.286 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:35.543 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:19:35.544 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:19:35.544 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:35.544 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61882 00:19:35.544 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:35.544 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:35.544 16:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61882 00:19:35.544 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61882 ']' 00:19:35.544 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.544 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.544 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.544 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.544 16:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:35.544 [2024-10-17 16:34:11.712133] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:19:35.544 [2024-10-17 16:34:11.712270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61882 ] 00:19:35.801 [2024-10-17 16:34:11.888378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.801 [2024-10-17 16:34:12.020902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.760 16:34:12 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.760 16:34:12 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:19:36.760 16:34:12 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:36.760 16:34:12 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:19:36.760 16:34:12 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:37.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:37.585 Waiting for block devices as requested 00:19:37.842 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:37.842 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:37.842 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:38.100 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:43.369 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:19:43.369 BYT; 00:19:43.369 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:19:43.369 BYT; 00:19:43.369 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:19:43.369 16:34:19 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:19:43.369 16:34:19 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:19:44.308 The operation has completed successfully. 00:19:44.308 16:34:20 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:19:45.243 The operation has completed successfully. 00:19:45.243 16:34:21 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:46.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:46.748 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:46.748 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:46.748 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:46.748 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:47.006 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:19:47.006 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.006 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:47.006 [] 00:19:47.006 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.006 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:19:47.006 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:19:47.006 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:19:47.006 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:47.006 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:19:47.006 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.006 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.265 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.265 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:19:47.265 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:47.265 16:34:23 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.265 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.265 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:47.265 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.524 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:47.524 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:47.524 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.524 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:47.524 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:47.524 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.524 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:47.525 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "945c8da8-cd43-4db6-adc3-9ea2fe8bfa82"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "945c8da8-cd43-4db6-adc3-9ea2fe8bfa82",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "705bce84-baee-4ae7-ba35-d48b96c736fe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "705bce84-baee-4ae7-ba35-d48b96c736fe",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b22ffaf8-b406-4eda-a14c-90ad989eeb5e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b22ffaf8-b406-4eda-a14c-90ad989eeb5e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1d191485-6d29-4d43-b688-9ca557bc80f7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d191485-6d29-4d43-b688-9ca557bc80f7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1332565b-5914-4cdd-a331-6426dedbd1a2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1332565b-5914-4cdd-a331-6426dedbd1a2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:19:47.525 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:47.525 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:47.525 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:19:47.525 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:47.525 16:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61882 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61882 ']' 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61882 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61882 00:19:47.525 killing process with pid 61882 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61882' 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61882 00:19:47.525 16:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61882 00:19:50.060 16:34:26 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:50.060 16:34:26 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:50.060 16:34:26 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:50.060 16:34:26 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.060 16:34:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:50.060 ************************************ 00:19:50.060 START TEST bdev_hello_world 00:19:50.060 ************************************ 00:19:50.060 16:34:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:19:50.319 [2024-10-17 
16:34:26.389329] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:19:50.319 [2024-10-17 16:34:26.389461] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62528 ] 00:19:50.319 [2024-10-17 16:34:26.561618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.578 [2024-10-17 16:34:26.686416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.146 [2024-10-17 16:34:27.348900] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:51.146 [2024-10-17 16:34:27.349134] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:19:51.146 [2024-10-17 16:34:27.349169] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:51.146 [2024-10-17 16:34:27.352242] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:51.146 [2024-10-17 16:34:27.352774] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:51.146 [2024-10-17 16:34:27.352810] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:51.146 [2024-10-17 16:34:27.352939] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:19:51.146 00:19:51.146 [2024-10-17 16:34:27.352967] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:52.525 00:19:52.525 real 0m2.194s 00:19:52.525 user 0m1.820s 00:19:52.525 sys 0m0.263s 00:19:52.525 ************************************ 00:19:52.525 END TEST bdev_hello_world 00:19:52.525 ************************************ 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:52.525 16:34:28 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:52.525 16:34:28 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:52.525 16:34:28 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:52.525 16:34:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:52.525 ************************************ 00:19:52.525 START TEST bdev_bounds 00:19:52.525 ************************************ 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62570 00:19:52.525 Process bdevio pid: 62570 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62570' 00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62570 00:19:52.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 62570 ']'
00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100
00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable
00:19:52.525 16:34:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:19:52.525 [2024-10-17 16:34:28.657933] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization...
00:19:52.525 [2024-10-17 16:34:28.658056] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62570 ]
00:19:52.784 [2024-10-17 16:34:28.833892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:52.785 [2024-10-17 16:34:28.960887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:52.785 [2024-10-17 16:34:28.960977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:52.785 [2024-10-17 16:34:28.961006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:53.440 16:34:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:19:53.440 16:34:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0
00:19:53.440 16:34:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:19:53.700 I/O targets:
00:19:53.700 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:19:53.700 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:19:53.700 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:19:53.700 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:53.700 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:53.700 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:53.700 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:19:53.700
00:19:53.700
00:19:53.700 CUnit - A unit testing framework for C - Version 2.1-3
00:19:53.700 http://cunit.sourceforge.net/
00:19:53.700
00:19:53.700
00:19:53.700 Suite: bdevio tests on: Nvme3n1
00:19:53.700 Test: blockdev write read block ...passed
00:19:53.700 Test: blockdev write zeroes read block ...passed
00:19:53.700 Test: blockdev write zeroes read no split ...passed
00:19:53.700 Test: blockdev write zeroes read split ...passed
00:19:53.700 Test: blockdev write zeroes read split partial ...passed
00:19:53.700 Test: blockdev reset ...[2024-10-17 16:34:29.839919] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller
00:19:53.700 [2024-10-17 16:34:29.843759] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:53.700 passed
00:19:53.700 Test: blockdev write read 8 blocks ...passed
00:19:53.700 Test: blockdev write read size > 128k ...passed
00:19:53.700 Test: blockdev write read invalid size ...passed
00:19:53.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:53.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:53.700 Test: blockdev write read max offset ...passed
00:19:53.700 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:53.700 Test: blockdev writev readv 8 blocks ...passed
00:19:53.700 Test: blockdev writev readv 30 x 1block ...passed
00:19:53.700 Test: blockdev writev readv block ...passed
00:19:53.700 Test: blockdev writev readv size > 128k ...passed
00:19:53.700 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:53.700 Test: blockdev comparev and writev ...[2024-10-17 16:34:29.853108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb804000 len:0x1000
00:19:53.700 [2024-10-17 16:34:29.853161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:19:53.700 passed
00:19:53.700 Test: blockdev nvme passthru rw ...passed
00:19:53.700 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:34:29.854092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:19:53.700 passed
00:19:53.700 Test: blockdev nvme admin passthru ...[2024-10-17 16:34:29.854235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:19:53.700 passed
00:19:53.700 Test: blockdev copy ...passed
00:19:53.700 Suite: bdevio tests on: Nvme2n3
00:19:53.700 Test: blockdev write read block ...passed
00:19:53.700 Test: blockdev write zeroes read block ...passed
00:19:53.700 Test: blockdev write zeroes read no split ...passed
00:19:53.700 Test: blockdev write zeroes read split ...passed
00:19:53.700 Test: blockdev write zeroes read split partial ...passed
00:19:53.700 Test: blockdev reset ...[2024-10-17 16:34:29.930717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:19:53.700 [2024-10-17 16:34:29.935712] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:53.700 passed
00:19:53.700 Test: blockdev write read 8 blocks ...passed
00:19:53.700 Test: blockdev write read size > 128k ...passed
00:19:53.700 Test: blockdev write read invalid size ...passed
00:19:53.700 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:53.700 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:53.700 Test: blockdev write read max offset ...passed
00:19:53.700 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:53.700 Test: blockdev writev readv 8 blocks ...passed
00:19:53.700 Test: blockdev writev readv 30 x 1block ...passed
00:19:53.700 Test: blockdev writev readv block ...passed
00:19:53.700 Test: blockdev writev readv size > 128k ...passed
00:19:53.700 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:53.700 Test: blockdev comparev and writev ...[2024-10-17 16:34:29.950787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb802000 len:0x1000
00:19:53.700 [2024-10-17 16:34:29.950838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:19:53.700 passed
00:19:53.700 Test: blockdev nvme passthru rw ...passed
00:19:53.700 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:34:29.952228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:19:53.700 passed
00:19:53.700 Test: blockdev nvme admin passthru ...[2024-10-17 16:34:29.952318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:19:53.700 passed
00:19:53.700 Test: blockdev copy ...passed
00:19:53.700 Suite: bdevio tests on: Nvme2n2
00:19:53.700 Test: blockdev write read block ...passed
00:19:53.700 Test: blockdev write zeroes read block ...passed
00:19:53.700 Test: blockdev write zeroes read no split ...passed
00:19:53.959 Test: blockdev write zeroes read split ...passed
00:19:53.959 Test: blockdev write zeroes read split partial ...passed
00:19:53.959 Test: blockdev reset ...[2024-10-17 16:34:30.048529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:19:53.959 [2024-10-17 16:34:30.053365] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:53.959 passed
00:19:53.959 Test: blockdev write read 8 blocks ...passed
00:19:53.959 Test: blockdev write read size > 128k ...passed
00:19:53.959 Test: blockdev write read invalid size ...passed
00:19:53.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:53.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:53.959 Test: blockdev write read max offset ...passed
00:19:53.959 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:53.959 Test: blockdev writev readv 8 blocks ...passed
00:19:53.959 Test: blockdev writev readv 30 x 1block ...passed
00:19:53.959 Test: blockdev writev readv block ...passed
00:19:53.959 Test: blockdev writev readv size > 128k ...passed
00:19:53.959 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:53.959 Test: blockdev comparev and writev ...[2024-10-17 16:34:30.069845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce638000 len:0x1000
00:19:53.959 [2024-10-17 16:34:30.069903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:19:53.959 passed
00:19:53.959 Test: blockdev nvme passthru rw ...passed
00:19:53.959 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:34:30.071203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:19:53.959 passed
00:19:53.959 Test: blockdev nvme admin passthru ...[2024-10-17 16:34:30.071349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:19:53.959 passed
00:19:53.959 Test: blockdev copy ...passed
00:19:53.959 Suite: bdevio tests on: Nvme2n1
00:19:53.959 Test: blockdev write read block ...passed
00:19:53.959 Test: blockdev write zeroes read block ...passed
00:19:53.959 Test: blockdev write zeroes read no split ...passed
00:19:53.959 Test: blockdev write zeroes read split ...passed
00:19:53.959 Test: blockdev write zeroes read split partial ...passed
00:19:53.959 Test: blockdev reset ...[2024-10-17 16:34:30.184475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller
00:19:53.959 [2024-10-17 16:34:30.188711] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:53.959 passed
00:19:53.959 Test: blockdev write read 8 blocks ...passed
00:19:53.959 Test: blockdev write read size > 128k ...passed
00:19:53.959 Test: blockdev write read invalid size ...passed
00:19:53.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:53.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:53.959 Test: blockdev write read max offset ...passed
00:19:53.959 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:53.959 Test: blockdev writev readv 8 blocks ...passed
00:19:53.959 Test: blockdev writev readv 30 x 1block ...passed
00:19:53.959 Test: blockdev writev readv block ...passed
00:19:53.959 Test: blockdev writev readv size > 128k ...passed
00:19:53.959 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:53.959 Test: blockdev comparev and writev ...[2024-10-17 16:34:30.198212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce634000 len:0x1000
00:19:53.959 [2024-10-17 16:34:30.198267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:19:53.959 passed
00:19:53.959 Test: blockdev nvme passthru rw ...passed
00:19:53.959 Test: blockdev nvme passthru vendor specific ...[2024-10-17 16:34:30.199173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:19:53.959 passed
00:19:53.959 Test: blockdev nvme admin passthru ...[2024-10-17 16:34:30.199311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:19:53.959 passed
00:19:53.959 Test: blockdev copy ...passed
00:19:53.959 Suite: bdevio tests on: Nvme1n1p2
00:19:53.959 Test: blockdev write read block ...passed
00:19:53.959 Test: blockdev write zeroes read block ...passed
00:19:53.959 Test: blockdev write zeroes read no split ...passed
00:19:53.959 Test: blockdev write zeroes read split ...passed
00:19:54.219 Test: blockdev write zeroes read split partial ...passed
00:19:54.219 Test: blockdev reset ...[2024-10-17 16:34:30.283179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:19:54.219 [2024-10-17 16:34:30.286943] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:54.219 passed
00:19:54.219 Test: blockdev write read 8 blocks ...passed
00:19:54.219 Test: blockdev write read size > 128k ...passed
00:19:54.219 Test: blockdev write read invalid size ...passed
00:19:54.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:54.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:54.219 Test: blockdev write read max offset ...passed
00:19:54.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:54.219 Test: blockdev writev readv 8 blocks ...passed
00:19:54.219 Test: blockdev writev readv 30 x 1block ...passed
00:19:54.219 Test: blockdev writev readv block ...passed
00:19:54.219 Test: blockdev writev readv size > 128k ...passed
00:19:54.219 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:54.219 Test: blockdev comparev and writev ...[2024-10-17 16:34:30.296378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2ce630000 len:0x1000
00:19:54.219 [2024-10-17 16:34:30.296549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:19:54.219 passed
00:19:54.219 Test: blockdev nvme passthru rw ...passed
00:19:54.219 Test: blockdev nvme passthru vendor specific ...passed
00:19:54.219 Test: blockdev nvme admin passthru ...passed
00:19:54.219 Test: blockdev copy ...passed
00:19:54.219 Suite: bdevio tests on: Nvme1n1p1
00:19:54.219 Test: blockdev write read block ...passed
00:19:54.219 Test: blockdev write zeroes read block ...passed
00:19:54.219 Test: blockdev write zeroes read no split ...passed
00:19:54.219 Test: blockdev write zeroes read split ...passed
00:19:54.219 Test: blockdev write zeroes read split partial ...passed
00:19:54.219 Test: blockdev reset ...[2024-10-17 16:34:30.363968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller
00:19:54.219 [2024-10-17 16:34:30.367779] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:54.219 passed
00:19:54.219 Test: blockdev write read 8 blocks ...passed
00:19:54.219 Test: blockdev write read size > 128k ...passed
00:19:54.219 Test: blockdev write read invalid size ...passed
00:19:54.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:54.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:54.219 Test: blockdev write read max offset ...passed
00:19:54.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:54.219 Test: blockdev writev readv 8 blocks ...passed
00:19:54.219 Test: blockdev writev readv 30 x 1block ...passed
00:19:54.219 Test: blockdev writev readv block ...passed
00:19:54.219 Test: blockdev writev readv size > 128k ...passed
00:19:54.219 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:54.219 Test: blockdev comparev and writev ...[2024-10-17 16:34:30.376141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bc20e000 len:0x1000
00:19:54.219 [2024-10-17 16:34:30.376189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:19:54.219 passed
00:19:54.219 Test: blockdev nvme passthru rw ...passed
00:19:54.219 Test: blockdev nvme passthru vendor specific ...passed
00:19:54.219 Test: blockdev nvme admin passthru ...passed
00:19:54.219 Test: blockdev copy ...passed
00:19:54.219 Suite: bdevio tests on: Nvme0n1
00:19:54.219 Test: blockdev write read block ...passed
00:19:54.219 Test: blockdev write zeroes read block ...passed
00:19:54.219 Test: blockdev write zeroes read no split ...passed
00:19:54.219 Test: blockdev write zeroes read split ...passed
00:19:54.219 Test: blockdev write zeroes read split partial ...passed
00:19:54.219 Test: blockdev reset ...[2024-10-17 16:34:30.444325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller
00:19:54.219 [2024-10-17 16:34:30.448226] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:54.219 passed
00:19:54.219 Test: blockdev write read 8 blocks ...passed
00:19:54.219 Test: blockdev write read size > 128k ...passed
00:19:54.219 Test: blockdev write read invalid size ...passed
00:19:54.219 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:54.219 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:54.219 Test: blockdev write read max offset ...passed
00:19:54.219 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:54.219 Test: blockdev writev readv 8 blocks ...passed
00:19:54.219 Test: blockdev writev readv 30 x 1block ...passed
00:19:54.219 Test: blockdev writev readv block ...passed
00:19:54.219 Test: blockdev writev readv size > 128k ...passed
00:19:54.219 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:54.219 Test: blockdev comparev and writev ...passed
00:19:54.219 Test: blockdev nvme passthru rw ...[2024-10-17 16:34:30.455684] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:19:54.219 separate metadata which is not supported yet.
00:19:54.219 passed 00:19:54.219 Test: blockdev nvme passthru vendor specific ...passed 00:19:54.219 Test: blockdev nvme admin passthru ...[2024-10-17 16:34:30.456308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:19:54.219 [2024-10-17 16:34:30.456354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:19:54.219 passed 00:19:54.219 Test: blockdev copy ...passed 00:19:54.219 00:19:54.219 Run Summary: Type Total Ran Passed Failed Inactive 00:19:54.219 suites 7 7 n/a 0 0 00:19:54.219 tests 161 161 161 0 0 00:19:54.219 asserts 1025 1025 1025 0 n/a 00:19:54.219 00:19:54.219 Elapsed time = 1.925 seconds 00:19:54.219 0 00:19:54.219 16:34:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62570 00:19:54.219 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 62570 ']' 00:19:54.219 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 62570 00:19:54.219 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:54.219 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.219 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62570 00:19:54.479 killing process with pid 62570 00:19:54.479 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:54.479 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:54.479 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62570' 00:19:54.479 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 62570 00:19:54.479 16:34:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 62570 00:19:55.431 ************************************ 00:19:55.431 END TEST bdev_bounds 00:19:55.431 ************************************ 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:55.431 00:19:55.431 real 0m3.014s 00:19:55.431 user 0m7.687s 00:19:55.431 sys 0m0.419s 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 16:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:55.431 16:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:55.431 16:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:55.431 16:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:19:55.431 ************************************ 00:19:55.431 START TEST bdev_nbd 00:19:55.431 ************************************ 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62635 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:55.431 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:55.432 16:34:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62635 /var/tmp/spdk-nbd.sock 00:19:55.432 16:34:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 62635 ']' 00:19:55.432 16:34:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:55.432 16:34:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:55.432 16:34:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:55.432 16:34:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.432 16:34:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:55.690 [2024-10-17 16:34:31.760244] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
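[annotation] The bdev_bounds teardown a few entries up runs killprocess 62570, an autotest_common.sh helper. Reconstructed from that xtrace — simplified, and hedged where the trace elides branches (the real helper special-cases a sudo parent, which this run never hits):

    # Reconstructed sketch of the helper as traced; not the verbatim source.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # guard seen in the trace: '[' -z 62570 ']'
        kill -0 "$pid" || return 0             # nothing to do if the process is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in this run
            # sudo special case omitted; see the '[' reactor_0 = sudo ']' check above
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it and collect the exit status
    }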
00:19:55.690 [2024-10-17 16:34:31.760373] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.690 [2024-10-17 16:34:31.926917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.948 [2024-10-17 16:34:32.049500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:56.553 16:34:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- 
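[annotation] Before any NBD RPC can run, the nbd_function_test prologue above pairs seven bdevs with NBD nodes and brings up a bare SPDK app (bdev_svc) on a dedicated RPC socket; the EAL banner just printed is that app starting on a single core. Condensed from the trace (paths as in this run), the scaffolding amounts to:

    # Condensed scaffolding sketch, not the full nbd_function_test.
    rpc_server=/var/tmp/spdk-nbd.sock
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    bdev_list=(Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)

    [[ -e /sys/module/nbd ]]    # the kernel nbd module must already be loaded

    # start the bare bdev service and wait for its RPC socket to come up
    test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 --json "$conf" &
    nbd_pid=$!
    waitforlisten "$nbd_pid" "$rpc_server"    # autotest helper: polls the pid and socket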
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.812 1+0 records in 00:19:56.812 1+0 records out 00:19:56.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00198477 s, 2.1 MB/s 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:56.812 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.070 1+0 records in 00:19:57.070 1+0 records out 00:19:57.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000572821 s, 7.2 MB/s 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:57.070 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.330 1+0 records in 00:19:57.330 1+0 records out 00:19:57.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619087 s, 6.6 MB/s 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:57.330 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:57.331 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:57.331 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:57.331 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.592 1+0 records in 00:19:57.592 1+0 records out 00:19:57.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662333 s, 6.2 MB/s 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:57.592 16:34:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.852 1+0 records in 00:19:57.852 1+0 records out 00:19:57.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412721 s, 9.9 MB/s 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:57.852 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:19:58.110 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.111 1+0 records in 00:19:58.111 1+0 records out 00:19:58.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894895 s, 4.6 MB/s 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:58.111 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.369 1+0 records in 00:19:58.369 1+0 records out 00:19:58.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562532 s, 7.3 MB/s 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:19:58.369 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd0", 00:19:58.629 "bdev_name": "Nvme0n1" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd1", 00:19:58.629 "bdev_name": "Nvme1n1p1" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd2", 00:19:58.629 "bdev_name": "Nvme1n1p2" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd3", 00:19:58.629 "bdev_name": "Nvme2n1" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd4", 00:19:58.629 "bdev_name": "Nvme2n2" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd5", 00:19:58.629 "bdev_name": "Nvme2n3" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd6", 00:19:58.629 "bdev_name": "Nvme3n1" 00:19:58.629 } 00:19:58.629 ]' 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd0", 00:19:58.629 "bdev_name": "Nvme0n1" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd1", 00:19:58.629 "bdev_name": "Nvme1n1p1" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd2", 00:19:58.629 "bdev_name": "Nvme1n1p2" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd3", 00:19:58.629 "bdev_name": "Nvme2n1" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd4", 00:19:58.629 "bdev_name": "Nvme2n2" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd5", 00:19:58.629 "bdev_name": "Nvme2n3" 00:19:58.629 }, 00:19:58.629 { 00:19:58.629 "nbd_device": "/dev/nbd6", 00:19:58.629 "bdev_name": "Nvme3n1" 00:19:58.629 } 00:19:58.629 ]' 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
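[annotation] With all seven disks exported, nbd_get_disks returns the bdev-to-device mapping as a JSON array of {nbd_device, bdev_name} objects, and the test folds it into a bash array with jq exactly as traced above; the same pipeline plus a grep -c gives the device count used by the start/stop verification. Standalone, the pattern is:

    # As traced: query the running service and extract the /dev/nbdX names.
    nbd_disks_json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))

    # grep -c prints 0 but exits non-zero when nothing matches, hence the || true
    # (the bare "true" in the trace above plays the same role)
    count=$(printf '%s\n' "${nbd_disks_name[@]}" | grep -c /dev/nbd || true)
    echo "$count nbd devices exported"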
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.629 16:34:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:58.888 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.889 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.197 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.457 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.457 16:34:35 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.717 16:34:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:59.976 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:59.976 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:59.976 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:59.976 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.976 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.977 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:00.235 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:00.493 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:00.493 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:00.493 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:00.752 
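[annotation] Each nbd_stop_disk above is paired with waitfornbd_exit, which polls /proc/partitions until the kernel has actually torn the node down; in this run the first grep already misses, hence the immediate break in every trace. A reconstructed sketch — the sleep branch is inferred, since these traces never need it:

    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # still registered; give the kernel time (inferred branch)
            else
                break        # device gone, as seen in every trace above
            fi
        done
        return 0
    }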
16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:00.752 16:34:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:20:01.010 /dev/nbd0 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.010 1+0 records in 00:20:01.010 1+0 records out 00:20:01.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052354 s, 7.8 MB/s 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:01.010 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:20:01.268 /dev/nbd1 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:01.268 16:34:37 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.268 1+0 records in 00:20:01.268 1+0 records out 00:20:01.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633323 s, 6.5 MB/s 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:01.268 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:20:01.526 /dev/nbd10 00:20:01.526 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:20:01.526 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:20:01.526 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:20:01.526 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:01.526 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:01.526 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.527 1+0 records in 00:20:01.527 1+0 records out 00:20:01.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867345 s, 4.7 MB/s 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:01.527 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:20:01.786 /dev/nbd11 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:01.786 1+0 records in 00:20:01.786 1+0 records out 00:20:01.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000908519 s, 4.5 MB/s 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:01.786 16:34:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:20:02.044 /dev/nbd12 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
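[annotation] The waitfornbd helper being traced here (and after every nbd_start_disk above) does two things: wait for the name to appear in /proc/partitions, then prove real I/O works by reading one 4 KiB block with iflag=direct and checking the copied size. Reconstructed, with the retry sleeps inferred (every trace above succeeds on the first pass):

    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=test/bdev/nbdtest    # traced as /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # inferred retry; not exercised in these traces
        done
        for ((i = 1; i <= 20; i++)); do
            # O_DIRECT read of one block forces I/O through the nbd device itself
            if dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0    # trace shows: '[' 4096 '!=' 0 ']'
            fi
            sleep 0.1    # inferred retry
        done
        return 1
    }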
00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.044 1+0 records in 00:20:02.044 1+0 records out 00:20:02.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638066 s, 6.4 MB/s 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:02.044 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:20:02.303 /dev/nbd13 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.303 1+0 records in 00:20:02.303 1+0 records out 00:20:02.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000822235 s, 5.0 MB/s 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:02.303 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:20:02.561 /dev/nbd14 00:20:02.561 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:20:02.561 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:20:02.561 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:20:02.561 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:02.561 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:02.561 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:02.561 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:20:02.819 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:02.819 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:02.819 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:02.819 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.819 1+0 records in 00:20:02.819 1+0 records out 00:20:02.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000994527 s, 4.1 MB/s 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.820 16:34:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:02.820 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd0", 00:20:02.820 "bdev_name": "Nvme0n1" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd1", 00:20:02.820 "bdev_name": "Nvme1n1p1" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd10", 00:20:02.820 "bdev_name": "Nvme1n1p2" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd11", 00:20:02.820 "bdev_name": "Nvme2n1" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd12", 00:20:02.820 "bdev_name": "Nvme2n2" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd13", 00:20:02.820 "bdev_name": "Nvme2n3" 
00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd14", 00:20:02.820 "bdev_name": "Nvme3n1" 00:20:02.820 } 00:20:02.820 ]' 00:20:02.820 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd0", 00:20:02.820 "bdev_name": "Nvme0n1" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd1", 00:20:02.820 "bdev_name": "Nvme1n1p1" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd10", 00:20:02.820 "bdev_name": "Nvme1n1p2" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd11", 00:20:02.820 "bdev_name": "Nvme2n1" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd12", 00:20:02.820 "bdev_name": "Nvme2n2" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd13", 00:20:02.820 "bdev_name": "Nvme2n3" 00:20:02.820 }, 00:20:02.820 { 00:20:02.820 "nbd_device": "/dev/nbd14", 00:20:02.820 "bdev_name": "Nvme3n1" 00:20:02.820 } 00:20:02.820 ]' 00:20:02.820 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:20:03.078 /dev/nbd1 00:20:03.078 /dev/nbd10 00:20:03.078 /dev/nbd11 00:20:03.078 /dev/nbd12 00:20:03.078 /dev/nbd13 00:20:03.078 /dev/nbd14' 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:20:03.078 /dev/nbd1 00:20:03.078 /dev/nbd10 00:20:03.078 /dev/nbd11 00:20:03.078 /dev/nbd12 00:20:03.078 /dev/nbd13 00:20:03.078 /dev/nbd14' 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:03.078 256+0 records in 00:20:03.078 256+0 records out 00:20:03.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00865947 s, 121 MB/s 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:03.078 256+0 records in 00:20:03.078 256+0 records out 00:20:03.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.157298 s, 6.7 MB/s 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.078 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:20:03.336 256+0 records in 00:20:03.336 256+0 records out 00:20:03.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136131 s, 7.7 MB/s 00:20:03.336 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.336 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:20:03.336 256+0 records in 00:20:03.336 256+0 records out 00:20:03.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1374 s, 7.6 MB/s 00:20:03.336 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.336 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:20:03.594 256+0 records in 00:20:03.594 256+0 records out 00:20:03.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139595 s, 7.5 MB/s 00:20:03.594 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.594 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:20:03.852 256+0 records in 00:20:03.852 256+0 records out 00:20:03.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136128 s, 7.7 MB/s 00:20:03.852 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.852 16:34:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:20:03.852 256+0 records in 00:20:03.852 256+0 records out 00:20:03.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134869 s, 7.8 MB/s 00:20:03.852 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:03.852 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:20:04.111 256+0 records in 00:20:04.111 256+0 records out 00:20:04.111 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138126 s, 7.6 MB/s 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.111 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.369 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.628 16:34:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.888 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.147 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.405 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.665 16:34:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.924 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:06.182 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:06.440 malloc_lvol_verify 00:20:06.440 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:06.698 9a715f53-d1b0-4d3b-859f-4c9b9bf8789d 00:20:06.698 16:34:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:06.956 75ace74b-be32-425e-97ce-e180449f4d1a 00:20:06.956 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:07.213 /dev/nbd0 00:20:07.213 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:07.213 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:07.213 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:07.213 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:07.213 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:07.213 mke2fs 1.47.0 (5-Feb-2023) 00:20:07.213 Discarding device blocks: 0/4096 done 00:20:07.213 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:07.213 00:20:07.213 Allocating group tables: 0/1 done 00:20:07.214 Writing inode tables: 0/1 done 00:20:07.214 Creating journal (1024 blocks): done 00:20:07.214 Writing superblocks and filesystem accounting information: 0/1 done 00:20:07.214 00:20:07.214 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:07.214 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:07.214 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:07.214 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:07.214 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:07.214 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:20:07.214 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62635 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 62635 ']' 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 62635 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.472 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62635 00:20:07.730 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:07.730 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:07.730 killing process with pid 62635 00:20:07.730 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62635' 00:20:07.730 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 62635 00:20:07.730 16:34:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 62635 00:20:09.104 16:34:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:09.104 00:20:09.104 real 0m13.457s 00:20:09.104 user 0m17.709s 00:20:09.104 sys 0m5.562s 00:20:09.104 ************************************ 00:20:09.104 END TEST bdev_nbd 00:20:09.104 ************************************ 00:20:09.104 16:34:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:09.104 16:34:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:09.104 16:34:45 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:09.104 16:34:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:20:09.104 16:34:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:20:09.104 skipping fio tests on NVMe due to multi-ns failures. 00:20:09.104 16:34:45 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:20:09.104 16:34:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:09.104 16:34:45 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:09.104 16:34:45 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:09.104 16:34:45 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:09.104 16:34:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:09.104 ************************************ 00:20:09.104 START TEST bdev_verify 00:20:09.104 ************************************ 00:20:09.104 16:34:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:09.104 [2024-10-17 16:34:45.268654] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:20:09.104 [2024-10-17 16:34:45.268832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63079 ] 00:20:09.363 [2024-10-17 16:34:45.445019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:09.363 [2024-10-17 16:34:45.573685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.363 [2024-10-17 16:34:45.573718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.337 Running I/O for 5 seconds... 
00:20:12.647 22080.00 IOPS, 86.25 MiB/s [2024-10-17T16:34:49.883Z] 22016.00 IOPS, 86.00 MiB/s [2024-10-17T16:34:50.820Z] 21056.00 IOPS, 82.25 MiB/s [2024-10-17T16:34:51.829Z] 20880.00 IOPS, 81.56 MiB/s [2024-10-17T16:34:51.829Z] 20915.20 IOPS, 81.70 MiB/s 00:20:15.530 Latency(us) 00:20:15.530 [2024-10-17T16:34:51.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.530 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x0 length 0xbd0bd 00:20:15.530 Nvme0n1 : 5.07 1476.20 5.77 0.00 0.00 86356.10 10738.43 77485.13 00:20:15.530 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:20:15.530 Nvme0n1 : 5.07 1463.08 5.72 0.00 0.00 87303.40 18213.22 74958.44 00:20:15.530 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x0 length 0x4ff80 00:20:15.530 Nvme1n1p1 : 5.07 1475.71 5.76 0.00 0.00 86227.46 8580.22 70747.30 00:20:15.530 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x4ff80 length 0x4ff80 00:20:15.530 Nvme1n1p1 : 5.08 1462.27 5.71 0.00 0.00 87187.67 19792.40 76221.79 00:20:15.530 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x0 length 0x4ff7f 00:20:15.530 Nvme1n1p2 : 5.08 1474.89 5.76 0.00 0.00 86014.47 9738.28 70326.18 00:20:15.530 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:20:15.530 Nvme1n1p2 : 5.08 1461.48 5.71 0.00 0.00 86978.65 20108.23 76642.90 00:20:15.530 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x0 length 0x80000 00:20:15.530 Nvme2n1 : 5.08 1474.11 5.76 0.00 0.00 85901.16 11159.54 68220.61 00:20:15.530 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x80000 length 0x80000 00:20:15.530 Nvme2n1 : 5.08 1460.78 5.71 0.00 0.00 86900.43 21476.86 74537.33 00:20:15.530 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x0 length 0x80000 00:20:15.530 Nvme2n2 : 5.09 1483.03 5.79 0.00 0.00 85482.13 9211.89 66957.26 00:20:15.530 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x80000 length 0x80000 00:20:15.530 Nvme2n2 : 5.08 1460.37 5.70 0.00 0.00 86801.10 20845.19 72431.76 00:20:15.530 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x0 length 0x80000 00:20:15.530 Nvme2n3 : 5.09 1482.69 5.79 0.00 0.00 85376.40 9106.61 69483.95 00:20:15.530 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x80000 length 0x80000 00:20:15.530 Nvme2n3 : 5.09 1459.96 5.70 0.00 0.00 86699.84 18423.78 73695.10 00:20:15.530 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:15.530 Verification LBA range: start 0x0 length 0x20000 00:20:15.530 Nvme3n1 : 5.09 1482.36 5.79 0.00 0.00 85272.78 8948.69 72010.64 00:20:15.531 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:15.531 Verification LBA range: start 0x20000 length 0x20000 00:20:15.531 Nvme3n1 : 
5.09 1459.60 5.70 0.00 0.00 86598.34 13580.95 74116.22 00:20:15.531 [2024-10-17T16:34:51.830Z] =================================================================================================================== 00:20:15.531 [2024-10-17T16:34:51.830Z] Total : 20576.53 80.38 0.00 0.00 86360.07 8580.22 77485.13 00:20:16.911 00:20:16.911 real 0m7.694s 00:20:16.911 user 0m14.188s 00:20:16.911 sys 0m0.327s 00:20:16.911 16:34:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:16.911 16:34:52 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:16.911 ************************************ 00:20:16.911 END TEST bdev_verify 00:20:16.911 ************************************ 00:20:16.911 16:34:52 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:16.911 16:34:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:16.911 16:34:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:16.911 16:34:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:16.911 ************************************ 00:20:16.911 START TEST bdev_verify_big_io 00:20:16.911 ************************************ 00:20:16.911 16:34:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:16.911 [2024-10-17 16:34:53.015031] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:20:16.911 [2024-10-17 16:34:53.015155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63177 ] 00:20:16.911 [2024-10-17 16:34:53.187006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:17.172 [2024-10-17 16:34:53.311439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.172 [2024-10-17 16:34:53.311454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.106 Running I/O for 5 seconds... 
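The bdev_verify_big_io run launched above reuses the same bdevperf verify harness but raises the I/O size from 4096 to 65536 bytes, exercising large transfers at a correspondingly lower IOPS rate. At a 64 KiB I/O size the throughput conversion is simply MiB/s = IOPS × 65536 / 2^20 = IOPS / 16; for example, the 2253 IOPS reported in the first interval below works out to 2253 / 16 ≈ 140.8 MiB/s, matching the printed 140.81 MiB/s.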
00:20:22.022 2253.00 IOPS, 140.81 MiB/s [2024-10-17T16:34:58.890Z] 2787.50 IOPS, 174.22 MiB/s [2024-10-17T16:35:00.268Z] 2470.00 IOPS, 154.38 MiB/s [2024-10-17T16:35:00.268Z] 2826.25 IOPS, 176.64 MiB/s 00:20:23.969 Latency(us) 00:20:23.969 [2024-10-17T16:35:00.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.969 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x0 length 0xbd0b 00:20:23.969 Nvme0n1 : 5.69 144.86 9.05 0.00 0.00 854064.12 13896.79 1259975.66 00:20:23.969 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0xbd0b length 0xbd0b 00:20:23.969 Nvme0n1 : 5.62 150.99 9.44 0.00 0.00 824972.82 30109.71 869181.07 00:20:23.969 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x0 length 0x4ff8 00:20:23.969 Nvme1n1p1 : 5.69 143.87 8.99 0.00 0.00 835372.21 24635.22 1273451.33 00:20:23.969 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x4ff8 length 0x4ff8 00:20:23.969 Nvme1n1p1 : 5.69 118.09 7.38 0.00 0.00 1024465.61 77485.13 1489062.14 00:20:23.969 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x0 length 0x4ff7 00:20:23.969 Nvme1n1p2 : 5.77 147.17 9.20 0.00 0.00 797676.61 39374.24 1286927.01 00:20:23.969 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x4ff7 length 0x4ff7 00:20:23.969 Nvme1n1p2 : 5.69 133.71 8.36 0.00 0.00 880611.77 85486.32 1677721.60 00:20:23.969 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x0 length 0x8000 00:20:23.969 Nvme2n1 : 5.71 148.61 9.29 0.00 0.00 780056.06 53060.47 1307140.52 00:20:23.969 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x8000 length 0x8000 00:20:23.969 Nvme2n1 : 5.69 156.49 9.78 0.00 0.00 747866.67 68220.61 825385.12 00:20:23.969 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x0 length 0x8000 00:20:23.969 Nvme2n2 : 5.77 150.73 9.42 0.00 0.00 749393.50 61903.88 1327354.04 00:20:23.969 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x8000 length 0x8000 00:20:23.969 Nvme2n2 : 5.71 161.26 10.08 0.00 0.00 712536.80 13580.95 808540.53 00:20:23.969 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x0 length 0x8000 00:20:23.969 Nvme2n3 : 5.82 163.50 10.22 0.00 0.00 680170.83 15791.81 1347567.55 00:20:23.969 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x8000 length 0x8000 00:20:23.969 Nvme2n3 : 5.76 166.76 10.42 0.00 0.00 674113.32 36215.88 822016.21 00:20:23.969 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x0 length 0x2000 00:20:23.969 Nvme3n1 : 5.84 179.05 11.19 0.00 0.00 609457.40 7474.79 1367781.06 00:20:23.969 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:23.969 Verification LBA range: start 0x2000 length 0x2000 00:20:23.969 Nvme3n1 : 5.80 182.45 11.40 0.00 0.00 
604733.23 5448.17 842229.72 00:20:23.969 [2024-10-17T16:35:00.268Z] =================================================================================================================== 00:20:23.969 [2024-10-17T16:35:00.268Z] Total : 2147.53 134.22 0.00 0.00 757475.17 5448.17 1677721.60 00:20:26.522 00:20:26.522 real 0m9.770s 00:20:26.522 user 0m18.339s 00:20:26.522 sys 0m0.344s 00:20:26.522 16:35:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:26.522 16:35:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:26.522 ************************************ 00:20:26.522 END TEST bdev_verify_big_io 00:20:26.522 ************************************ 00:20:26.522 16:35:02 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:26.522 16:35:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:26.522 16:35:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:26.522 16:35:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:26.522 ************************************ 00:20:26.522 START TEST bdev_write_zeroes 00:20:26.522 ************************************ 00:20:26.522 16:35:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:26.781 [2024-10-17 16:35:02.862682] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:20:26.781 [2024-10-17 16:35:02.862823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63297 ] 00:20:26.781 [2024-10-17 16:35:03.034884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.040 [2024-10-17 16:35:03.155562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.609 Running I/O for 1 seconds... 
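The bdev_write_zeroes run launched above switches the workload to -w write_zeroes for a one-second run on a single core (core mask 0x1, one reactor), exercising each bdev's zero-fill path rather than data-carrying writes. The same arithmetic applies at the 4 KiB I/O size: MiB/s = IOPS × 4096 / 2^20 = IOPS / 256, so the 60852 IOPS headline below corresponds to 60852 / 256 ≈ 237.7 MiB/s, as printed.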
00:20:29.019 60852.00 IOPS, 237.70 MiB/s 00:20:29.019 Latency(us) 00:20:29.019 [2024-10-17T16:35:05.318Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.019 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.019 Nvme0n1 : 1.02 8537.95 33.35 0.00 0.00 14961.22 5632.41 70326.18 00:20:29.019 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.019 Nvme1n1p1 : 1.02 8783.28 34.31 0.00 0.00 14523.03 11001.63 53692.14 00:20:29.019 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.019 Nvme1n1p2 : 1.02 8684.75 33.92 0.00 0.00 14636.47 10791.07 62325.00 00:20:29.019 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.019 Nvme2n1 : 1.03 8676.87 33.89 0.00 0.00 14570.88 10738.43 57271.62 00:20:29.019 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.019 Nvme2n2 : 1.03 8668.20 33.86 0.00 0.00 14565.86 10843.71 57271.62 00:20:29.019 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.019 Nvme2n3 : 1.03 8660.02 33.83 0.00 0.00 14532.76 10212.04 55166.05 00:20:29.019 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:29.019 Nvme3n1 : 1.03 8652.30 33.80 0.00 0.00 14493.60 8843.41 53692.14 00:20:29.019 [2024-10-17T16:35:05.318Z] =================================================================================================================== 00:20:29.019 [2024-10-17T16:35:05.318Z] Total : 60663.38 236.97 0.00 0.00 14610.90 5632.41 70326.18 00:20:29.955 00:20:29.955 real 0m3.307s 00:20:29.955 user 0m2.910s 00:20:29.955 sys 0m0.282s 00:20:29.955 16:35:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.955 16:35:06 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:29.955 ************************************ 00:20:29.955 END TEST bdev_write_zeroes 00:20:29.955 ************************************ 00:20:29.955 16:35:06 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:29.955 16:35:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:29.955 16:35:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.955 16:35:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:29.955 ************************************ 00:20:29.955 START TEST bdev_json_nonenclosed 00:20:29.955 ************************************ 00:20:29.955 16:35:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.213 [2024-10-17 16:35:06.253839] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
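The bdev_json_nonenclosed run starting above, and the bdev_json_nonarray run after it, are negative tests: bdevperf is fed deliberately malformed JSON configs and must reject them cleanly instead of crashing. The actual nonenclosed.json and nonarray.json contents are not shown in this log; a hedged illustration of the two failure shapes, inferred from the error messages printed below:

    // valid shape: a top-level object whose "subsystems" key is an array
    { "subsystems": [ ] }
    // nonenclosed.json: config not wrapped in {} -> "not enclosed in {}"
    "subsystems": [ ]
    // nonarray.json: "subsystems" present but not an array -> "should be an array"
    { "subsystems": { } }

Both runs end with spdk_app_stop'd on non-zero, which is the expected outcome for these tests.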
00:20:30.213 [2024-10-17 16:35:06.253972] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63350 ] 00:20:30.213 [2024-10-17 16:35:06.428134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.471 [2024-10-17 16:35:06.559403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.471 [2024-10-17 16:35:06.559506] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:30.471 [2024-10-17 16:35:06.559527] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:30.471 [2024-10-17 16:35:06.559551] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:30.730 00:20:30.730 real 0m0.673s 00:20:30.730 user 0m0.430s 00:20:30.730 sys 0m0.137s 00:20:30.730 16:35:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:30.730 ************************************ 00:20:30.730 END TEST bdev_json_nonenclosed 00:20:30.730 ************************************ 00:20:30.730 16:35:06 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:30.730 16:35:06 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.730 16:35:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:30.730 16:35:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:30.730 16:35:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:30.730 ************************************ 00:20:30.730 START TEST bdev_json_nonarray 00:20:30.730 ************************************ 00:20:30.730 16:35:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:30.730 [2024-10-17 16:35:06.990540] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:20:30.730 [2024-10-17 16:35:06.990677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63376 ] 00:20:30.988 [2024-10-17 16:35:07.163723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.988 [2024-10-17 16:35:07.282206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.988 [2024-10-17 16:35:07.282317] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:30.989 [2024-10-17 16:35:07.282337] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:30.989 [2024-10-17 16:35:07.282349] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:31.555 00:20:31.555 real 0m0.654s 00:20:31.555 user 0m0.384s 00:20:31.555 sys 0m0.165s 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:31.555 ************************************ 00:20:31.555 END TEST bdev_json_nonarray 00:20:31.555 ************************************ 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:31.555 16:35:07 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:20:31.555 16:35:07 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:20:31.555 16:35:07 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:20:31.555 16:35:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:31.555 16:35:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:31.555 16:35:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:31.555 ************************************ 00:20:31.555 START TEST bdev_gpt_uuid 00:20:31.555 ************************************ 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63401 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63401 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 63401 ']' 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:31.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:31.555 16:35:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:31.555 [2024-10-17 16:35:07.733636] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:20:31.555 [2024-10-17 16:35:07.733773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63401 ] 00:20:31.814 [2024-10-17 16:35:07.907053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.814 [2024-10-17 16:35:08.025849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.750 16:35:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:32.750 16:35:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:20:32.750 16:35:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:32.750 16:35:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.750 16:35:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:33.009 Some configs were skipped because the RPC state that can call them passed over. 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:20:33.009 { 00:20:33.009 "name": "Nvme1n1p1", 00:20:33.009 "aliases": [ 00:20:33.009 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:20:33.009 ], 00:20:33.009 "product_name": "GPT Disk", 00:20:33.009 "block_size": 4096, 00:20:33.009 "num_blocks": 655104, 00:20:33.009 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:20:33.009 "assigned_rate_limits": { 00:20:33.009 "rw_ios_per_sec": 0, 00:20:33.009 "rw_mbytes_per_sec": 0, 00:20:33.009 "r_mbytes_per_sec": 0, 00:20:33.009 "w_mbytes_per_sec": 0 00:20:33.009 }, 00:20:33.009 "claimed": false, 00:20:33.009 "zoned": false, 00:20:33.009 "supported_io_types": { 00:20:33.009 "read": true, 00:20:33.009 "write": true, 00:20:33.009 "unmap": true, 00:20:33.009 "flush": true, 00:20:33.009 "reset": true, 00:20:33.009 "nvme_admin": false, 00:20:33.009 "nvme_io": false, 00:20:33.009 "nvme_io_md": false, 00:20:33.009 "write_zeroes": true, 00:20:33.009 "zcopy": false, 00:20:33.009 "get_zone_info": false, 00:20:33.009 "zone_management": false, 00:20:33.009 "zone_append": false, 00:20:33.009 "compare": true, 00:20:33.009 "compare_and_write": false, 00:20:33.009 "abort": true, 00:20:33.009 "seek_hole": false, 00:20:33.009 "seek_data": false, 00:20:33.009 "copy": true, 00:20:33.009 "nvme_iov_md": false 00:20:33.009 }, 00:20:33.009 "driver_specific": { 
00:20:33.009 "gpt": { 00:20:33.009 "base_bdev": "Nvme1n1", 00:20:33.009 "offset_blocks": 256, 00:20:33.009 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:20:33.009 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:20:33.009 "partition_name": "SPDK_TEST_first" 00:20:33.009 } 00:20:33.009 } 00:20:33.009 } 00:20:33.009 ]' 00:20:33.009 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:20:33.267 { 00:20:33.267 "name": "Nvme1n1p2", 00:20:33.267 "aliases": [ 00:20:33.267 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:20:33.267 ], 00:20:33.267 "product_name": "GPT Disk", 00:20:33.267 "block_size": 4096, 00:20:33.267 "num_blocks": 655103, 00:20:33.267 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:20:33.267 "assigned_rate_limits": { 00:20:33.267 "rw_ios_per_sec": 0, 00:20:33.267 "rw_mbytes_per_sec": 0, 00:20:33.267 "r_mbytes_per_sec": 0, 00:20:33.267 "w_mbytes_per_sec": 0 00:20:33.267 }, 00:20:33.267 "claimed": false, 00:20:33.267 "zoned": false, 00:20:33.267 "supported_io_types": { 00:20:33.267 "read": true, 00:20:33.267 "write": true, 00:20:33.267 "unmap": true, 00:20:33.267 "flush": true, 00:20:33.267 "reset": true, 00:20:33.267 "nvme_admin": false, 00:20:33.267 "nvme_io": false, 00:20:33.267 "nvme_io_md": false, 00:20:33.267 "write_zeroes": true, 00:20:33.267 "zcopy": false, 00:20:33.267 "get_zone_info": false, 00:20:33.267 "zone_management": false, 00:20:33.267 "zone_append": false, 00:20:33.267 "compare": true, 00:20:33.267 "compare_and_write": false, 00:20:33.267 "abort": true, 00:20:33.267 "seek_hole": false, 00:20:33.267 "seek_data": false, 00:20:33.267 "copy": true, 00:20:33.267 "nvme_iov_md": false 00:20:33.267 }, 00:20:33.267 "driver_specific": { 00:20:33.267 "gpt": { 00:20:33.267 "base_bdev": "Nvme1n1", 00:20:33.267 "offset_blocks": 655360, 00:20:33.267 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:20:33.267 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:20:33.267 "partition_name": "SPDK_TEST_second" 00:20:33.267 } 00:20:33.267 } 00:20:33.267 } 00:20:33.267 ]' 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:20:33.267 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63401 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 63401 ']' 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 63401 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63401 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:33.526 killing process with pid 63401 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63401' 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 63401 00:20:33.526 16:35:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 63401 00:20:36.059 00:20:36.059 real 0m4.468s 00:20:36.059 user 0m4.578s 00:20:36.059 sys 0m0.550s 00:20:36.059 16:35:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.059 16:35:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:20:36.059 ************************************ 00:20:36.059 END TEST bdev_gpt_uuid 00:20:36.059 ************************************ 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:20:36.059 16:35:12 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:36.626 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:36.885 Waiting for block devices as requested 00:20:36.885 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.885 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:20:37.142 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:37.142 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:42.430 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:42.430 16:35:18 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:20:42.430 16:35:18 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:20:42.689 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:20:42.689 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:20:42.689 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:20:42.689 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:20:42.689 16:35:18 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:20:42.689 00:20:42.689 real 1m7.457s 00:20:42.689 user 1m24.733s 00:20:42.689 sys 0m12.456s 00:20:42.689 ************************************ 00:20:42.689 END TEST blockdev_nvme_gpt 00:20:42.689 ************************************ 00:20:42.689 16:35:18 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.689 16:35:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:20:42.689 16:35:18 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:20:42.689 16:35:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:42.689 16:35:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:42.689 16:35:18 -- common/autotest_common.sh@10 -- # set +x 00:20:42.689 ************************************ 00:20:42.689 START TEST nvme 00:20:42.689 ************************************ 00:20:42.689 16:35:18 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:20:42.689 * Looking for test storage... 00:20:42.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:20:42.689 16:35:18 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:42.689 16:35:18 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:20:42.689 16:35:18 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:42.949 16:35:19 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.949 16:35:19 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.949 16:35:19 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.949 16:35:19 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.949 16:35:19 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.949 16:35:19 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.949 16:35:19 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.949 16:35:19 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:42.949 16:35:19 nvme -- scripts/common.sh@344 -- # case "$op" in 00:20:42.949 16:35:19 nvme -- scripts/common.sh@345 -- # : 1 00:20:42.949 16:35:19 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.949 16:35:19 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.949 16:35:19 nvme -- scripts/common.sh@365 -- # decimal 1 00:20:42.949 16:35:19 nvme -- scripts/common.sh@353 -- # local d=1 00:20:42.949 16:35:19 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.949 16:35:19 nvme -- scripts/common.sh@355 -- # echo 1 00:20:42.949 16:35:19 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.949 16:35:19 nvme -- scripts/common.sh@366 -- # decimal 2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@353 -- # local d=2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.949 16:35:19 nvme -- scripts/common.sh@355 -- # echo 2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.949 16:35:19 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.949 16:35:19 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.949 16:35:19 nvme -- scripts/common.sh@368 -- # return 0 00:20:42.949 16:35:19 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.949 16:35:19 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.949 --rc genhtml_branch_coverage=1 00:20:42.949 --rc genhtml_function_coverage=1 00:20:42.949 --rc genhtml_legend=1 00:20:42.949 --rc geninfo_all_blocks=1 00:20:42.949 --rc geninfo_unexecuted_blocks=1 00:20:42.949 00:20:42.949 ' 00:20:42.949 16:35:19 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.949 --rc genhtml_branch_coverage=1 00:20:42.949 --rc genhtml_function_coverage=1 00:20:42.949 --rc genhtml_legend=1 00:20:42.949 --rc geninfo_all_blocks=1 00:20:42.949 --rc geninfo_unexecuted_blocks=1 00:20:42.949 00:20:42.949 ' 00:20:42.949 16:35:19 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.949 --rc genhtml_branch_coverage=1 00:20:42.949 --rc genhtml_function_coverage=1 00:20:42.949 --rc genhtml_legend=1 00:20:42.949 --rc geninfo_all_blocks=1 00:20:42.949 --rc geninfo_unexecuted_blocks=1 00:20:42.949 00:20:42.949 ' 00:20:42.949 16:35:19 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:42.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.949 --rc genhtml_branch_coverage=1 00:20:42.949 --rc genhtml_function_coverage=1 00:20:42.949 --rc genhtml_legend=1 00:20:42.949 --rc geninfo_all_blocks=1 00:20:42.949 --rc geninfo_unexecuted_blocks=1 00:20:42.949 00:20:42.949 ' 00:20:42.949 16:35:19 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:43.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:44.472 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.472 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.472 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.472 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:44.472 16:35:20 nvme -- nvme/nvme.sh@79 -- # uname 00:20:44.472 16:35:20 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:20:44.472 16:35:20 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:20:44.472 16:35:20 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:20:44.472 16:35:20 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1071 -- # stubpid=64065 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:20:44.472 Waiting for stub to ready for secondary processes... 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64065 ]] 00:20:44.472 16:35:20 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:20:44.762 [2024-10-17 16:35:20.773532] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:20:44.762 [2024-10-17 16:35:20.773678] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:20:45.701 16:35:21 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:20:45.702 16:35:21 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/64065 ]] 00:20:45.702 16:35:21 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:20:45.702 [2024-10-17 16:35:21.797728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:45.702 [2024-10-17 16:35:21.908621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.702 [2024-10-17 16:35:21.908791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.702 [2024-10-17 16:35:21.908822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.702 [2024-10-17 16:35:21.926679] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:20:45.702 [2024-10-17 16:35:21.926743] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:20:45.702 [2024-10-17 16:35:21.943190] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:20:45.702 [2024-10-17 16:35:21.943347] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:20:45.702 [2024-10-17 16:35:21.946644] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:20:45.702 [2024-10-17 16:35:21.946894] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:20:45.702 [2024-10-17 16:35:21.946965] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:20:45.702 [2024-10-17 16:35:21.950196] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:20:45.702 [2024-10-17 16:35:21.950463] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:20:45.702 [2024-10-17 16:35:21.950565] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:20:45.702 [2024-10-17 16:35:21.954749] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:20:45.702 [2024-10-17 16:35:21.955053] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:20:45.702 [2024-10-17 16:35:21.955152] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:20:45.702 [2024-10-17 16:35:21.955228] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:20:45.702 [2024-10-17 16:35:21.955295] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:20:46.641 16:35:22 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:20:46.641 done. 00:20:46.641 16:35:22 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:20:46.642 16:35:22 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:20:46.642 16:35:22 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:20:46.642 16:35:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.642 16:35:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:46.642 ************************************ 00:20:46.642 START TEST nvme_reset 00:20:46.642 ************************************ 00:20:46.642 16:35:22 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:20:46.900 Initializing NVMe Controllers 00:20:46.900 Skipping QEMU NVMe SSD at 0000:00:10.0 00:20:46.900 Skipping QEMU NVMe SSD at 0000:00:11.0 00:20:46.900 Skipping QEMU NVMe SSD at 0000:00:13.0 00:20:46.900 Skipping QEMU NVMe SSD at 0000:00:12.0 00:20:46.900 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:20:46.900 ************************************ 00:20:46.900 END TEST nvme_reset 00:20:46.900 ************************************ 00:20:46.900 00:20:46.900 real 0m0.294s 00:20:46.900 user 0m0.091s 00:20:46.900 sys 0m0.159s 00:20:46.900 16:35:23 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.900 16:35:23 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:20:46.900 16:35:23 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:20:46.900 16:35:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:46.900 16:35:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.900 16:35:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:46.900 ************************************ 00:20:46.900 START TEST nvme_identify 00:20:46.900 ************************************ 00:20:46.900 16:35:23 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:20:46.900 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:20:46.900 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:20:46.900 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:20:46.900 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:20:46.900 16:35:23 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:20:46.900 16:35:23 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:20:46.900 16:35:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:46.900 16:35:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:20:46.900 16:35:23 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:46.901 16:35:23 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:20:46.901 16:35:23 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:46.901 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:20:47.163 [2024-10-17 16:35:23.433250] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 64099 terminated unexpected 00:20:47.163 ===================================================== 00:20:47.163 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:47.163 ===================================================== 00:20:47.163 Controller Capabilities/Features 00:20:47.163 ================================ 00:20:47.163 Vendor ID: 1b36 00:20:47.163 Subsystem Vendor ID: 1af4 00:20:47.163 Serial Number: 12340 00:20:47.163 Model Number: QEMU NVMe Ctrl 00:20:47.163 Firmware Version: 8.0.0 00:20:47.163 Recommended Arb Burst: 6 00:20:47.163 IEEE OUI Identifier: 00 54 52 00:20:47.163 Multi-path I/O 00:20:47.163 May have multiple subsystem ports: No 00:20:47.163 May have multiple controllers: No 00:20:47.163 Associated with SR-IOV VF: No 00:20:47.163 Max Data Transfer Size: 524288 00:20:47.163 Max Number of Namespaces: 256 00:20:47.163 Max Number of I/O Queues: 64 00:20:47.163 NVMe Specification Version (VS): 1.4 00:20:47.163 NVMe Specification Version (Identify): 1.4 00:20:47.163 Maximum Queue Entries: 2048 00:20:47.163 Contiguous Queues Required: Yes 00:20:47.163 Arbitration Mechanisms Supported 00:20:47.163 Weighted Round Robin: Not Supported 00:20:47.163 Vendor Specific: Not Supported 00:20:47.163 Reset Timeout: 7500 ms 00:20:47.163 Doorbell Stride: 4 bytes 00:20:47.163 NVM Subsystem Reset: Not Supported 00:20:47.163 Command Sets Supported 00:20:47.163 NVM Command Set: Supported 00:20:47.163 Boot Partition: Not Supported 00:20:47.163 Memory Page Size Minimum: 4096 bytes 00:20:47.163 Memory Page Size Maximum: 65536 bytes 00:20:47.163 Persistent Memory Region: Not Supported 00:20:47.163 Optional Asynchronous Events Supported 00:20:47.163 Namespace Attribute Notices: Supported 00:20:47.163 Firmware Activation Notices: Not Supported 00:20:47.163 ANA Change Notices: Not Supported 00:20:47.163 PLE Aggregate Log Change Notices: Not Supported 00:20:47.163 LBA Status Info Alert Notices: Not Supported 00:20:47.163 EGE Aggregate Log Change Notices: Not Supported 00:20:47.163 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.163 Zone Descriptor Change Notices: Not Supported 00:20:47.163 Discovery Log Change Notices: Not Supported 00:20:47.163 Controller Attributes 00:20:47.163 128-bit Host Identifier: Not Supported 00:20:47.163 Non-Operational Permissive Mode: Not Supported 00:20:47.163 NVM Sets: Not Supported 00:20:47.163 Read Recovery Levels: Not Supported 00:20:47.163 Endurance Groups: Not Supported 00:20:47.163 Predictable Latency Mode: Not Supported 00:20:47.163 Traffic Based Keep ALive: Not Supported 00:20:47.163 Namespace Granularity: Not Supported 00:20:47.163 SQ Associations: Not Supported 00:20:47.163 UUID List: Not Supported 00:20:47.163 Multi-Domain Subsystem: Not Supported 00:20:47.163 Fixed Capacity Management: Not Supported 00:20:47.163 Variable Capacity Management: Not Supported 00:20:47.163 Delete Endurance Group: Not Supported 00:20:47.163 Delete NVM Set: Not Supported 00:20:47.163 Extended LBA Formats Supported: Supported 00:20:47.163 Flexible Data Placement Supported: Not Supported 00:20:47.163 00:20:47.163 Controller Memory Buffer Support 00:20:47.163 ================================ 00:20:47.163 Supported: No 00:20:47.163 
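For readers following the trace: the bdf list printed above is the whole of get_nvme_bdfs, which is just scripts/gen_nvme.sh piped through jq to pull each controller's PCIe address out of .config[].params.traddr. A minimal standalone sketch of that lookup, assuming only that rootdir points at the SPDK checkout used in this run:

    #!/usr/bin/env bash
    # Sketch of the bdf enumeration traced above; not part of the test run itself.
    rootdir=/home/vagrant/spdk_repo/spdk
    # gen_nvme.sh prints a JSON config; each entry's params.traddr is a PCIe address.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    # The harness bails out when nothing is found; mirror that check here.
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # this run printed 0000:00:10.0 through 0000:00:13.0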
00:20:47.163 Persistent Memory Region Support 00:20:47.163 ================================ 00:20:47.163 Supported: No 00:20:47.163 00:20:47.163 Admin Command Set Attributes 00:20:47.163 ============================ 00:20:47.163 Security Send/Receive: Not Supported 00:20:47.163 Format NVM: Supported 00:20:47.164 Firmware Activate/Download: Not Supported 00:20:47.164 Namespace Management: Supported 00:20:47.164 Device Self-Test: Not Supported 00:20:47.164 Directives: Supported 00:20:47.164 NVMe-MI: Not Supported 00:20:47.164 Virtualization Management: Not Supported 00:20:47.164 Doorbell Buffer Config: Supported 00:20:47.164 Get LBA Status Capability: Not Supported 00:20:47.164 Command & Feature Lockdown Capability: Not Supported 00:20:47.164 Abort Command Limit: 4 00:20:47.164 Async Event Request Limit: 4 00:20:47.164 Number of Firmware Slots: N/A 00:20:47.164 Firmware Slot 1 Read-Only: N/A 00:20:47.164 Firmware Activation Without Reset: N/A 00:20:47.164 Multiple Update Detection Support: N/A 00:20:47.164 Firmware Update Granularity: No Information Provided 00:20:47.164 Per-Namespace SMART Log: Yes 00:20:47.164 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.164 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:20:47.164 Command Effects Log Page: Supported 00:20:47.164 Get Log Page Extended Data: Supported 00:20:47.164 Telemetry Log Pages: Not Supported 00:20:47.164 Persistent Event Log Pages: Not Supported 00:20:47.164 Supported Log Pages Log Page: May Support 00:20:47.164 Commands Supported & Effects Log Page: Not Supported 00:20:47.164 Feature Identifiers & Effects Log Page:May Support 00:20:47.164 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.164 Data Area 4 for Telemetry Log: Not Supported 00:20:47.164 Error Log Page Entries Supported: 1 00:20:47.164 Keep Alive: Not Supported 00:20:47.164 00:20:47.164 NVM Command Set Attributes 00:20:47.164 ========================== 00:20:47.164 Submission Queue Entry Size 00:20:47.164 Max: 64 00:20:47.164 Min: 64 00:20:47.164 Completion Queue Entry Size 00:20:47.164 Max: 16 00:20:47.164 Min: 16 00:20:47.164 Number of Namespaces: 256 00:20:47.164 Compare Command: Supported 00:20:47.164 Write Uncorrectable Command: Not Supported 00:20:47.164 Dataset Management Command: Supported 00:20:47.164 Write Zeroes Command: Supported 00:20:47.164 Set Features Save Field: Supported 00:20:47.164 Reservations: Not Supported 00:20:47.164 Timestamp: Supported 00:20:47.164 Copy: Supported 00:20:47.164 Volatile Write Cache: Present 00:20:47.164 Atomic Write Unit (Normal): 1 00:20:47.164 Atomic Write Unit (PFail): 1 00:20:47.164 Atomic Compare & Write Unit: 1 00:20:47.164 Fused Compare & Write: Not Supported 00:20:47.164 Scatter-Gather List 00:20:47.164 SGL Command Set: Supported 00:20:47.164 SGL Keyed: Not Supported 00:20:47.164 SGL Bit Bucket Descriptor: Not Supported 00:20:47.164 SGL Metadata Pointer: Not Supported 00:20:47.164 Oversized SGL: Not Supported 00:20:47.164 SGL Metadata Address: Not Supported 00:20:47.164 SGL Offset: Not Supported 00:20:47.164 Transport SGL Data Block: Not Supported 00:20:47.164 Replay Protected Memory Block: Not Supported 00:20:47.164 00:20:47.164 Firmware Slot Information 00:20:47.164 ========================= 00:20:47.164 Active slot: 1 00:20:47.164 Slot 1 Firmware Revision: 1.0 00:20:47.164 00:20:47.164 00:20:47.164 Commands Supported and Effects 00:20:47.164 ============================== 00:20:47.164 Admin Commands 00:20:47.164 -------------- 00:20:47.164 Delete I/O Submission Queue (00h): Supported 00:20:47.164 
Create I/O Submission Queue (01h): Supported 00:20:47.164 Get Log Page (02h): Supported 00:20:47.164 Delete I/O Completion Queue (04h): Supported 00:20:47.164 Create I/O Completion Queue (05h): Supported 00:20:47.164 Identify (06h): Supported 00:20:47.164 Abort (08h): Supported 00:20:47.164 Set Features (09h): Supported 00:20:47.164 Get Features (0Ah): Supported 00:20:47.164 Asynchronous Event Request (0Ch): Supported 00:20:47.164 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:47.164 Directive Send (19h): Supported 00:20:47.164 Directive Receive (1Ah): Supported 00:20:47.164 Virtualization Management (1Ch): Supported 00:20:47.164 Doorbell Buffer Config (7Ch): Supported 00:20:47.164 Format NVM (80h): Supported LBA-Change 00:20:47.164 I/O Commands 00:20:47.164 ------------ 00:20:47.164 Flush (00h): Supported LBA-Change 00:20:47.164 Write (01h): Supported LBA-Change 00:20:47.164 Read (02h): Supported 00:20:47.164 Compare (05h): Supported 00:20:47.164 Write Zeroes (08h): Supported LBA-Change 00:20:47.164 Dataset Management (09h): Supported LBA-Change 00:20:47.164 Unknown (0Ch): Supported 00:20:47.164 Unknown (12h): Supported 00:20:47.164 Copy (19h): Supported LBA-Change 00:20:47.164 Unknown (1Dh): Supported LBA-Change 00:20:47.164 00:20:47.164 Error Log 00:20:47.164 ========= 00:20:47.164 00:20:47.164 Arbitration 00:20:47.164 =========== 00:20:47.164 Arbitration Burst: no limit 00:20:47.164 00:20:47.164 Power Management 00:20:47.164 ================ 00:20:47.164 Number of Power States: 1 00:20:47.164 Current Power State: Power State #0 00:20:47.164 Power State #0: 00:20:47.164 Max Power: 25.00 W 00:20:47.164 Non-Operational State: Operational 00:20:47.164 Entry Latency: 16 microseconds 00:20:47.164 Exit Latency: 4 microseconds 00:20:47.164 Relative Read Throughput: 0 00:20:47.164 Relative Read Latency: 0 00:20:47.164 Relative Write Throughput: 0 00:20:47.164 Relative Write Latency: 0 00:20:47.164 [2024-10-17 16:35:23.434739] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 64099 terminated unexpected 00:20:47.164 Idle Power: Not Reported 00:20:47.164 Active Power: Not Reported 00:20:47.164 Non-Operational Permissive Mode: Not Supported 00:20:47.164 00:20:47.164 Health Information 00:20:47.164 ================== 00:20:47.164 Critical Warnings: 00:20:47.164 Available Spare Space: OK 00:20:47.164 Temperature: OK 00:20:47.164 Device Reliability: OK 00:20:47.164 Read Only: No 00:20:47.164 Volatile Memory Backup: OK 00:20:47.164 Current Temperature: 323 Kelvin (50 Celsius) 00:20:47.164 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:47.164 Available Spare: 0% 00:20:47.164 Available Spare Threshold: 0% 00:20:47.164 Life Percentage Used: 0% 00:20:47.164 Data Units Read: 815 00:20:47.164 Data Units Written: 743 00:20:47.164 Host Read Commands: 37407 00:20:47.164 Host Write Commands: 37193 00:20:47.164 Controller Busy Time: 0 minutes 00:20:47.164 Power Cycles: 0 00:20:47.164 Power On Hours: 0 hours 00:20:47.164 Unsafe Shutdowns: 0 00:20:47.164 Unrecoverable Media Errors: 0 00:20:47.164 Lifetime Error Log Entries: 0 00:20:47.164 Warning Temperature Time: 0 minutes 00:20:47.164 Critical Temperature Time: 0 minutes 00:20:47.164 00:20:47.164 Number of Queues 00:20:47.164 ================ 00:20:47.164 Number of I/O Submission Queues: 64 00:20:47.164 Number of I/O Completion Queues: 64 00:20:47.164 00:20:47.164 ZNS Specific Controller Data 00:20:47.164 ============================ 00:20:47.164 Zone Append Size Limit: 0 00:20:47.164 00:20:47.164
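An aside on working with these dumps: everything spdk_nvme_identify prints is flat "Field: Value" text, so spot checks against output like the block above need nothing heavier than awk. A hypothetical sketch, with the binary path taken from this run's layout and field names from the Health Information block above:

    #!/usr/bin/env bash
    # Illustrative only: extract two health fields from an identify dump.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    dump=$("$SPDK_BIN/spdk_nvme_identify" -i 0)   # same invocation as this test run
    # -F': ' splits each "Field: Value" line; exit after the first match.
    temp=$(awk -F': ' '/Current Temperature:/ {print $2; exit}' <<<"$dump")
    spare=$(awk -F': ' '/Available Spare:/ {print $2; exit}' <<<"$dump")
    echo "temperature=${temp} available_spare=${spare}"

Against the first controller above this would print "temperature=323 Kelvin (50 Celsius) available_spare=0%".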
00:20:47.164 Active Namespaces 00:20:47.164 ================= 00:20:47.164 Namespace ID:1 00:20:47.164 Error Recovery Timeout: Unlimited 00:20:47.164 Command Set Identifier: NVM (00h) 00:20:47.164 Deallocate: Supported 00:20:47.164 Deallocated/Unwritten Error: Supported 00:20:47.164 Deallocated Read Value: All 0x00 00:20:47.164 Deallocate in Write Zeroes: Not Supported 00:20:47.164 Deallocated Guard Field: 0xFFFF 00:20:47.164 Flush: Supported 00:20:47.164 Reservation: Not Supported 00:20:47.164 Metadata Transferred as: Separate Metadata Buffer 00:20:47.164 Namespace Sharing Capabilities: Private 00:20:47.164 Size (in LBAs): 1548666 (5GiB) 00:20:47.164 Capacity (in LBAs): 1548666 (5GiB) 00:20:47.164 Utilization (in LBAs): 1548666 (5GiB) 00:20:47.164 Thin Provisioning: Not Supported 00:20:47.164 Per-NS Atomic Units: No 00:20:47.164 Maximum Single Source Range Length: 128 00:20:47.164 Maximum Copy Length: 128 00:20:47.164 Maximum Source Range Count: 128 00:20:47.164 NGUID/EUI64 Never Reused: No 00:20:47.164 Namespace Write Protected: No 00:20:47.164 Number of LBA Formats: 8 00:20:47.164 Current LBA Format: LBA Format #07 00:20:47.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.164 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:47.164 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.164 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.164 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.164 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.164 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.164 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.164 00:20:47.164 NVM Specific Namespace Data 00:20:47.164 =========================== 00:20:47.164 Logical Block Storage Tag Mask: 0 00:20:47.164 Protection Information Capabilities: 00:20:47.164 16b Guard Protection Information Storage Tag Support: No 00:20:47.164 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:47.164 Storage Tag Check Read Support: No 00:20:47.164 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.164 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.164 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.164 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.164 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.164 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.165 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.165 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.165 ===================================================== 00:20:47.165 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:47.165 ===================================================== 00:20:47.165 Controller Capabilities/Features 00:20:47.165 ================================ 00:20:47.165 Vendor ID: 1b36 00:20:47.165 Subsystem Vendor ID: 1af4 00:20:47.165 Serial Number: 12341 00:20:47.165 Model Number: QEMU NVMe Ctrl 00:20:47.165 Firmware Version: 8.0.0 00:20:47.165 Recommended Arb Burst: 6 00:20:47.165 IEEE OUI Identifier: 00 54 52 00:20:47.165 Multi-path I/O 00:20:47.165 May have multiple subsystem ports: No 00:20:47.165 May have multiple controllers: No 00:20:47.165 
Associated with SR-IOV VF: No 00:20:47.165 Max Data Transfer Size: 524288 00:20:47.165 Max Number of Namespaces: 256 00:20:47.165 Max Number of I/O Queues: 64 00:20:47.165 NVMe Specification Version (VS): 1.4 00:20:47.165 NVMe Specification Version (Identify): 1.4 00:20:47.165 Maximum Queue Entries: 2048 00:20:47.165 Contiguous Queues Required: Yes 00:20:47.165 Arbitration Mechanisms Supported 00:20:47.165 Weighted Round Robin: Not Supported 00:20:47.165 Vendor Specific: Not Supported 00:20:47.165 Reset Timeout: 7500 ms 00:20:47.165 Doorbell Stride: 4 bytes 00:20:47.165 NVM Subsystem Reset: Not Supported 00:20:47.165 Command Sets Supported 00:20:47.165 NVM Command Set: Supported 00:20:47.165 Boot Partition: Not Supported 00:20:47.165 Memory Page Size Minimum: 4096 bytes 00:20:47.165 Memory Page Size Maximum: 65536 bytes 00:20:47.165 Persistent Memory Region: Not Supported 00:20:47.165 Optional Asynchronous Events Supported 00:20:47.165 Namespace Attribute Notices: Supported 00:20:47.165 Firmware Activation Notices: Not Supported 00:20:47.165 ANA Change Notices: Not Supported 00:20:47.165 PLE Aggregate Log Change Notices: Not Supported 00:20:47.165 LBA Status Info Alert Notices: Not Supported 00:20:47.165 EGE Aggregate Log Change Notices: Not Supported 00:20:47.165 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.165 Zone Descriptor Change Notices: Not Supported 00:20:47.165 Discovery Log Change Notices: Not Supported 00:20:47.165 Controller Attributes 00:20:47.165 128-bit Host Identifier: Not Supported 00:20:47.165 Non-Operational Permissive Mode: Not Supported 00:20:47.165 NVM Sets: Not Supported 00:20:47.165 Read Recovery Levels: Not Supported 00:20:47.165 Endurance Groups: Not Supported 00:20:47.165 Predictable Latency Mode: Not Supported 00:20:47.165 Traffic Based Keep ALive: Not Supported 00:20:47.165 Namespace Granularity: Not Supported 00:20:47.165 SQ Associations: Not Supported 00:20:47.165 UUID List: Not Supported 00:20:47.165 Multi-Domain Subsystem: Not Supported 00:20:47.165 Fixed Capacity Management: Not Supported 00:20:47.165 Variable Capacity Management: Not Supported 00:20:47.165 Delete Endurance Group: Not Supported 00:20:47.165 Delete NVM Set: Not Supported 00:20:47.165 Extended LBA Formats Supported: Supported 00:20:47.165 Flexible Data Placement Supported: Not Supported 00:20:47.165 00:20:47.165 Controller Memory Buffer Support 00:20:47.165 ================================ 00:20:47.165 Supported: No 00:20:47.165 00:20:47.165 Persistent Memory Region Support 00:20:47.165 ================================ 00:20:47.165 Supported: No 00:20:47.165 00:20:47.165 Admin Command Set Attributes 00:20:47.165 ============================ 00:20:47.165 Security Send/Receive: Not Supported 00:20:47.165 Format NVM: Supported 00:20:47.165 Firmware Activate/Download: Not Supported 00:20:47.165 Namespace Management: Supported 00:20:47.165 Device Self-Test: Not Supported 00:20:47.165 Directives: Supported 00:20:47.165 NVMe-MI: Not Supported 00:20:47.165 Virtualization Management: Not Supported 00:20:47.165 Doorbell Buffer Config: Supported 00:20:47.165 Get LBA Status Capability: Not Supported 00:20:47.165 Command & Feature Lockdown Capability: Not Supported 00:20:47.165 Abort Command Limit: 4 00:20:47.165 Async Event Request Limit: 4 00:20:47.165 Number of Firmware Slots: N/A 00:20:47.165 Firmware Slot 1 Read-Only: N/A 00:20:47.165 Firmware Activation Without Reset: N/A 00:20:47.165 Multiple Update Detection Support: N/A 00:20:47.165 Firmware Update Granularity: No Information 
Provided 00:20:47.165 Per-Namespace SMART Log: Yes 00:20:47.165 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.165 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:20:47.165 Command Effects Log Page: Supported 00:20:47.165 Get Log Page Extended Data: Supported 00:20:47.165 Telemetry Log Pages: Not Supported 00:20:47.165 Persistent Event Log Pages: Not Supported 00:20:47.165 Supported Log Pages Log Page: May Support 00:20:47.165 Commands Supported & Effects Log Page: Not Supported 00:20:47.165 Feature Identifiers & Effects Log Page:May Support 00:20:47.165 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.165 Data Area 4 for Telemetry Log: Not Supported 00:20:47.165 Error Log Page Entries Supported: 1 00:20:47.165 Keep Alive: Not Supported 00:20:47.165 00:20:47.165 NVM Command Set Attributes 00:20:47.165 ========================== 00:20:47.165 Submission Queue Entry Size 00:20:47.165 Max: 64 00:20:47.165 Min: 64 00:20:47.165 Completion Queue Entry Size 00:20:47.165 Max: 16 00:20:47.165 Min: 16 00:20:47.165 Number of Namespaces: 256 00:20:47.165 Compare Command: Supported 00:20:47.165 Write Uncorrectable Command: Not Supported 00:20:47.165 Dataset Management Command: Supported 00:20:47.165 Write Zeroes Command: Supported 00:20:47.165 Set Features Save Field: Supported 00:20:47.165 Reservations: Not Supported 00:20:47.165 Timestamp: Supported 00:20:47.165 Copy: Supported 00:20:47.165 Volatile Write Cache: Present 00:20:47.165 Atomic Write Unit (Normal): 1 00:20:47.165 Atomic Write Unit (PFail): 1 00:20:47.165 Atomic Compare & Write Unit: 1 00:20:47.165 Fused Compare & Write: Not Supported 00:20:47.165 Scatter-Gather List 00:20:47.165 SGL Command Set: Supported 00:20:47.165 SGL Keyed: Not Supported 00:20:47.165 SGL Bit Bucket Descriptor: Not Supported 00:20:47.165 SGL Metadata Pointer: Not Supported 00:20:47.165 Oversized SGL: Not Supported 00:20:47.165 SGL Metadata Address: Not Supported 00:20:47.165 SGL Offset: Not Supported 00:20:47.165 Transport SGL Data Block: Not Supported 00:20:47.165 Replay Protected Memory Block: Not Supported 00:20:47.165 00:20:47.165 Firmware Slot Information 00:20:47.165 ========================= 00:20:47.165 Active slot: 1 00:20:47.165 Slot 1 Firmware Revision: 1.0 00:20:47.165 00:20:47.165 00:20:47.165 Commands Supported and Effects 00:20:47.165 ============================== 00:20:47.165 Admin Commands 00:20:47.165 -------------- 00:20:47.165 Delete I/O Submission Queue (00h): Supported 00:20:47.165 Create I/O Submission Queue (01h): Supported 00:20:47.165 Get Log Page (02h): Supported 00:20:47.165 Delete I/O Completion Queue (04h): Supported 00:20:47.165 Create I/O Completion Queue (05h): Supported 00:20:47.165 Identify (06h): Supported 00:20:47.165 Abort (08h): Supported 00:20:47.165 Set Features (09h): Supported 00:20:47.165 Get Features (0Ah): Supported 00:20:47.165 Asynchronous Event Request (0Ch): Supported 00:20:47.165 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:47.165 Directive Send (19h): Supported 00:20:47.165 Directive Receive (1Ah): Supported 00:20:47.165 Virtualization Management (1Ch): Supported 00:20:47.165 Doorbell Buffer Config (7Ch): Supported 00:20:47.165 Format NVM (80h): Supported LBA-Change 00:20:47.165 I/O Commands 00:20:47.165 ------------ 00:20:47.165 Flush (00h): Supported LBA-Change 00:20:47.165 Write (01h): Supported LBA-Change 00:20:47.165 Read (02h): Supported 00:20:47.165 Compare (05h): Supported 00:20:47.165 Write Zeroes (08h): Supported LBA-Change 00:20:47.165 Dataset Management (09h): 
Supported LBA-Change 00:20:47.165 Unknown (0Ch): Supported 00:20:47.165 Unknown (12h): Supported 00:20:47.165 Copy (19h): Supported LBA-Change 00:20:47.165 Unknown (1Dh): Supported LBA-Change 00:20:47.165 00:20:47.165 Error Log 00:20:47.165 ========= 00:20:47.165 00:20:47.165 Arbitration 00:20:47.165 =========== 00:20:47.165 Arbitration Burst: no limit 00:20:47.165 00:20:47.165 Power Management 00:20:47.165 ================ 00:20:47.165 Number of Power States: 1 00:20:47.165 Current Power State: Power State #0 00:20:47.165 Power State #0: 00:20:47.165 Max Power: 25.00 W 00:20:47.165 Non-Operational State: Operational 00:20:47.165 Entry Latency: 16 microseconds 00:20:47.165 Exit Latency: 4 microseconds 00:20:47.165 Relative Read Throughput: 0 00:20:47.165 Relative Read Latency: 0 00:20:47.165 Relative Write Throughput: 0 00:20:47.165 Relative Write Latency: 0 00:20:47.165 Idle Power: Not Reported 00:20:47.165 Active Power: Not Reported 00:20:47.165 Non-Operational Permissive Mode: Not Supported 00:20:47.165 00:20:47.165 Health Information 00:20:47.165 ================== 00:20:47.166 Critical Warnings: 00:20:47.166 Available Spare Space: OK 00:20:47.166 [2024-10-17 16:35:23.435588] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 64099 terminated unexpected 00:20:47.166 Temperature: OK 00:20:47.166 Device Reliability: OK 00:20:47.166 Read Only: No 00:20:47.166 Volatile Memory Backup: OK 00:20:47.166 Current Temperature: 323 Kelvin (50 Celsius) 00:20:47.166 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:47.166 Available Spare: 0% 00:20:47.166 Available Spare Threshold: 0% 00:20:47.166 Life Percentage Used: 0% 00:20:47.166 Data Units Read: 1178 00:20:47.166 Data Units Written: 1045 00:20:47.166 Host Read Commands: 54820 00:20:47.166 Host Write Commands: 53595 00:20:47.166 Controller Busy Time: 0 minutes 00:20:47.166 Power Cycles: 0 00:20:47.166 Power On Hours: 0 hours 00:20:47.166 Unsafe Shutdowns: 0 00:20:47.166 Unrecoverable Media Errors: 0 00:20:47.166 Lifetime Error Log Entries: 0 00:20:47.166 Warning Temperature Time: 0 minutes 00:20:47.166 Critical Temperature Time: 0 minutes 00:20:47.166 00:20:47.166 Number of Queues 00:20:47.166 ================ 00:20:47.166 Number of I/O Submission Queues: 64 00:20:47.166 Number of I/O Completion Queues: 64 00:20:47.166 00:20:47.166 ZNS Specific Controller Data 00:20:47.166 ============================ 00:20:47.166 Zone Append Size Limit: 0 00:20:47.166 00:20:47.166 00:20:47.166 Active Namespaces 00:20:47.166 ================= 00:20:47.166 Namespace ID:1 00:20:47.166 Error Recovery Timeout: Unlimited 00:20:47.166 Command Set Identifier: NVM (00h) 00:20:47.166 Deallocate: Supported 00:20:47.166 Deallocated/Unwritten Error: Supported 00:20:47.166 Deallocated Read Value: All 0x00 00:20:47.166 Deallocate in Write Zeroes: Not Supported 00:20:47.166 Deallocated Guard Field: 0xFFFF 00:20:47.166 Flush: Supported 00:20:47.166 Reservation: Not Supported 00:20:47.166 Namespace Sharing Capabilities: Private 00:20:47.166 Size (in LBAs): 1310720 (5GiB) 00:20:47.166 Capacity (in LBAs): 1310720 (5GiB) 00:20:47.166 Utilization (in LBAs): 1310720 (5GiB) 00:20:47.166 Thin Provisioning: Not Supported 00:20:47.166 Per-NS Atomic Units: No 00:20:47.166 Maximum Single Source Range Length: 128 00:20:47.166 Maximum Copy Length: 128 00:20:47.166 Maximum Source Range Count: 128 00:20:47.166 NGUID/EUI64 Never Reused: No 00:20:47.166 Namespace Write Protected: No 00:20:47.166 Number of LBA Formats: 8 00:20:47.166 Current LBA Format: LBA
Format #04 00:20:47.166 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.166 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:47.166 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.166 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.166 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.166 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.166 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.166 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.166 00:20:47.166 NVM Specific Namespace Data 00:20:47.166 =========================== 00:20:47.166 Logical Block Storage Tag Mask: 0 00:20:47.166 Protection Information Capabilities: 00:20:47.166 16b Guard Protection Information Storage Tag Support: No 00:20:47.166 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:47.166 Storage Tag Check Read Support: No 00:20:47.166 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.166 ===================================================== 00:20:47.166 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:47.166 ===================================================== 00:20:47.166 Controller Capabilities/Features 00:20:47.166 ================================ 00:20:47.166 Vendor ID: 1b36 00:20:47.166 Subsystem Vendor ID: 1af4 00:20:47.166 Serial Number: 12343 00:20:47.166 Model Number: QEMU NVMe Ctrl 00:20:47.166 Firmware Version: 8.0.0 00:20:47.166 Recommended Arb Burst: 6 00:20:47.166 IEEE OUI Identifier: 00 54 52 00:20:47.166 Multi-path I/O 00:20:47.166 May have multiple subsystem ports: No 00:20:47.166 May have multiple controllers: Yes 00:20:47.166 Associated with SR-IOV VF: No 00:20:47.166 Max Data Transfer Size: 524288 00:20:47.166 Max Number of Namespaces: 256 00:20:47.166 Max Number of I/O Queues: 64 00:20:47.166 NVMe Specification Version (VS): 1.4 00:20:47.166 NVMe Specification Version (Identify): 1.4 00:20:47.166 Maximum Queue Entries: 2048 00:20:47.166 Contiguous Queues Required: Yes 00:20:47.166 Arbitration Mechanisms Supported 00:20:47.166 Weighted Round Robin: Not Supported 00:20:47.166 Vendor Specific: Not Supported 00:20:47.166 Reset Timeout: 7500 ms 00:20:47.166 Doorbell Stride: 4 bytes 00:20:47.166 NVM Subsystem Reset: Not Supported 00:20:47.166 Command Sets Supported 00:20:47.166 NVM Command Set: Supported 00:20:47.166 Boot Partition: Not Supported 00:20:47.166 Memory Page Size Minimum: 4096 bytes 00:20:47.166 Memory Page Size Maximum: 65536 bytes 00:20:47.166 Persistent Memory Region: Not Supported 00:20:47.166 Optional Asynchronous Events Supported 00:20:47.166 Namespace Attribute Notices: Supported 00:20:47.166 Firmware Activation Notices: Not Supported 00:20:47.166 ANA Change Notices: Not Supported 00:20:47.166 PLE Aggregate Log Change 
Notices: Not Supported 00:20:47.166 LBA Status Info Alert Notices: Not Supported 00:20:47.166 EGE Aggregate Log Change Notices: Not Supported 00:20:47.166 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.166 Zone Descriptor Change Notices: Not Supported 00:20:47.166 Discovery Log Change Notices: Not Supported 00:20:47.166 Controller Attributes 00:20:47.166 128-bit Host Identifier: Not Supported 00:20:47.166 Non-Operational Permissive Mode: Not Supported 00:20:47.166 NVM Sets: Not Supported 00:20:47.166 Read Recovery Levels: Not Supported 00:20:47.166 Endurance Groups: Supported 00:20:47.166 Predictable Latency Mode: Not Supported 00:20:47.166 Traffic Based Keep ALive: Not Supported 00:20:47.166 Namespace Granularity: Not Supported 00:20:47.166 SQ Associations: Not Supported 00:20:47.166 UUID List: Not Supported 00:20:47.166 Multi-Domain Subsystem: Not Supported 00:20:47.166 Fixed Capacity Management: Not Supported 00:20:47.166 Variable Capacity Management: Not Supported 00:20:47.166 Delete Endurance Group: Not Supported 00:20:47.166 Delete NVM Set: Not Supported 00:20:47.166 Extended LBA Formats Supported: Supported 00:20:47.166 Flexible Data Placement Supported: Supported 00:20:47.166 00:20:47.166 Controller Memory Buffer Support 00:20:47.166 ================================ 00:20:47.166 Supported: No 00:20:47.166 00:20:47.166 Persistent Memory Region Support 00:20:47.166 ================================ 00:20:47.166 Supported: No 00:20:47.166 00:20:47.166 Admin Command Set Attributes 00:20:47.166 ============================ 00:20:47.166 Security Send/Receive: Not Supported 00:20:47.166 Format NVM: Supported 00:20:47.166 Firmware Activate/Download: Not Supported 00:20:47.166 Namespace Management: Supported 00:20:47.166 Device Self-Test: Not Supported 00:20:47.166 Directives: Supported 00:20:47.166 NVMe-MI: Not Supported 00:20:47.166 Virtualization Management: Not Supported 00:20:47.166 Doorbell Buffer Config: Supported 00:20:47.166 Get LBA Status Capability: Not Supported 00:20:47.166 Command & Feature Lockdown Capability: Not Supported 00:20:47.166 Abort Command Limit: 4 00:20:47.166 Async Event Request Limit: 4 00:20:47.166 Number of Firmware Slots: N/A 00:20:47.166 Firmware Slot 1 Read-Only: N/A 00:20:47.166 Firmware Activation Without Reset: N/A 00:20:47.166 Multiple Update Detection Support: N/A 00:20:47.166 Firmware Update Granularity: No Information Provided 00:20:47.166 Per-Namespace SMART Log: Yes 00:20:47.166 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.166 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:20:47.166 Command Effects Log Page: Supported 00:20:47.166 Get Log Page Extended Data: Supported 00:20:47.166 Telemetry Log Pages: Not Supported 00:20:47.166 Persistent Event Log Pages: Not Supported 00:20:47.166 Supported Log Pages Log Page: May Support 00:20:47.166 Commands Supported & Effects Log Page: Not Supported 00:20:47.166 Feature Identifiers & Effects Log Page:May Support 00:20:47.166 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.166 Data Area 4 for Telemetry Log: Not Supported 00:20:47.166 Error Log Page Entries Supported: 1 00:20:47.166 Keep Alive: Not Supported 00:20:47.166 00:20:47.166 NVM Command Set Attributes 00:20:47.166 ========================== 00:20:47.166 Submission Queue Entry Size 00:20:47.166 Max: 64 00:20:47.166 Min: 64 00:20:47.166 Completion Queue Entry Size 00:20:47.166 Max: 16 00:20:47.166 Min: 16 00:20:47.166 Number of Namespaces: 256 00:20:47.166 Compare Command: Supported 00:20:47.166 Write 
Uncorrectable Command: Not Supported 00:20:47.166 Dataset Management Command: Supported 00:20:47.166 Write Zeroes Command: Supported 00:20:47.166 Set Features Save Field: Supported 00:20:47.166 Reservations: Not Supported 00:20:47.166 Timestamp: Supported 00:20:47.167 Copy: Supported 00:20:47.167 Volatile Write Cache: Present 00:20:47.167 Atomic Write Unit (Normal): 1 00:20:47.167 Atomic Write Unit (PFail): 1 00:20:47.167 Atomic Compare & Write Unit: 1 00:20:47.167 Fused Compare & Write: Not Supported 00:20:47.167 Scatter-Gather List 00:20:47.167 SGL Command Set: Supported 00:20:47.167 SGL Keyed: Not Supported 00:20:47.167 SGL Bit Bucket Descriptor: Not Supported 00:20:47.167 SGL Metadata Pointer: Not Supported 00:20:47.167 Oversized SGL: Not Supported 00:20:47.167 SGL Metadata Address: Not Supported 00:20:47.167 SGL Offset: Not Supported 00:20:47.167 Transport SGL Data Block: Not Supported 00:20:47.167 Replay Protected Memory Block: Not Supported 00:20:47.167 00:20:47.167 Firmware Slot Information 00:20:47.167 ========================= 00:20:47.167 Active slot: 1 00:20:47.167 Slot 1 Firmware Revision: 1.0 00:20:47.167 00:20:47.167 00:20:47.167 Commands Supported and Effects 00:20:47.167 ============================== 00:20:47.167 Admin Commands 00:20:47.167 -------------- 00:20:47.167 Delete I/O Submission Queue (00h): Supported 00:20:47.167 Create I/O Submission Queue (01h): Supported 00:20:47.167 Get Log Page (02h): Supported 00:20:47.167 Delete I/O Completion Queue (04h): Supported 00:20:47.167 Create I/O Completion Queue (05h): Supported 00:20:47.167 Identify (06h): Supported 00:20:47.167 Abort (08h): Supported 00:20:47.167 Set Features (09h): Supported 00:20:47.167 Get Features (0Ah): Supported 00:20:47.167 Asynchronous Event Request (0Ch): Supported 00:20:47.167 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:47.167 Directive Send (19h): Supported 00:20:47.167 Directive Receive (1Ah): Supported 00:20:47.167 Virtualization Management (1Ch): Supported 00:20:47.167 Doorbell Buffer Config (7Ch): Supported 00:20:47.167 Format NVM (80h): Supported LBA-Change 00:20:47.167 I/O Commands 00:20:47.167 ------------ 00:20:47.167 Flush (00h): Supported LBA-Change 00:20:47.167 Write (01h): Supported LBA-Change 00:20:47.167 Read (02h): Supported 00:20:47.167 Compare (05h): Supported 00:20:47.167 Write Zeroes (08h): Supported LBA-Change 00:20:47.167 Dataset Management (09h): Supported LBA-Change 00:20:47.167 Unknown (0Ch): Supported 00:20:47.167 Unknown (12h): Supported 00:20:47.167 Copy (19h): Supported LBA-Change 00:20:47.167 Unknown (1Dh): Supported LBA-Change 00:20:47.167 00:20:47.167 Error Log 00:20:47.167 ========= 00:20:47.167 00:20:47.167 Arbitration 00:20:47.167 =========== 00:20:47.167 Arbitration Burst: no limit 00:20:47.167 00:20:47.167 Power Management 00:20:47.167 ================ 00:20:47.167 Number of Power States: 1 00:20:47.167 Current Power State: Power State #0 00:20:47.167 Power State #0: 00:20:47.167 Max Power: 25.00 W 00:20:47.167 Non-Operational State: Operational 00:20:47.167 Entry Latency: 16 microseconds 00:20:47.167 Exit Latency: 4 microseconds 00:20:47.167 Relative Read Throughput: 0 00:20:47.167 Relative Read Latency: 0 00:20:47.167 Relative Write Throughput: 0 00:20:47.167 Relative Write Latency: 0 00:20:47.167 Idle Power: Not Reported 00:20:47.167 Active Power: Not Reported 00:20:47.167 Non-Operational Permissive Mode: Not Supported 00:20:47.167 00:20:47.167 Health Information 00:20:47.167 ================== 00:20:47.167 Critical Warnings: 00:20:47.167 
Available Spare Space: OK 00:20:47.167 Temperature: OK 00:20:47.167 Device Reliability: OK 00:20:47.167 Read Only: No 00:20:47.167 Volatile Memory Backup: OK 00:20:47.167 Current Temperature: 323 Kelvin (50 Celsius) 00:20:47.167 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:47.167 Available Spare: 0% 00:20:47.167 Available Spare Threshold: 0% 00:20:47.167 Life Percentage Used: 0% 00:20:47.167 Data Units Read: 997 00:20:47.167 Data Units Written: 926 00:20:47.167 Host Read Commands: 39189 00:20:47.167 Host Write Commands: 38612 00:20:47.167 Controller Busy Time: 0 minutes 00:20:47.167 Power Cycles: 0 00:20:47.167 Power On Hours: 0 hours 00:20:47.167 Unsafe Shutdowns: 0 00:20:47.167 Unrecoverable Media Errors: 0 00:20:47.167 Lifetime Error Log Entries: 0 00:20:47.167 Warning Temperature Time: 0 minutes 00:20:47.167 Critical Temperature Time: 0 minutes 00:20:47.167 00:20:47.167 Number of Queues 00:20:47.167 ================ 00:20:47.167 Number of I/O Submission Queues: 64 00:20:47.167 Number of I/O Completion Queues: 64 00:20:47.167 00:20:47.167 ZNS Specific Controller Data 00:20:47.167 ============================ 00:20:47.167 Zone Append Size Limit: 0 00:20:47.167 00:20:47.167 00:20:47.167 Active Namespaces 00:20:47.167 ================= 00:20:47.167 Namespace ID:1 00:20:47.167 Error Recovery Timeout: Unlimited 00:20:47.167 Command Set Identifier: NVM (00h) 00:20:47.167 Deallocate: Supported 00:20:47.167 Deallocated/Unwritten Error: Supported 00:20:47.167 Deallocated Read Value: All 0x00 00:20:47.167 Deallocate in Write Zeroes: Not Supported 00:20:47.167 Deallocated Guard Field: 0xFFFF 00:20:47.167 Flush: Supported 00:20:47.167 Reservation: Not Supported 00:20:47.167 Namespace Sharing Capabilities: Multiple Controllers 00:20:47.167 Size (in LBAs): 262144 (1GiB) 00:20:47.167 Capacity (in LBAs): 262144 (1GiB) 00:20:47.167 Utilization (in LBAs): 262144 (1GiB) 00:20:47.167 Thin Provisioning: Not Supported 00:20:47.167 Per-NS Atomic Units: No 00:20:47.167 Maximum Single Source Range Length: 128 00:20:47.167 Maximum Copy Length: 128 00:20:47.167 Maximum Source Range Count: 128 00:20:47.167 NGUID/EUI64 Never Reused: No 00:20:47.167 Namespace Write Protected: No 00:20:47.167 Endurance group ID: 1 00:20:47.167 Number of LBA Formats: 8 00:20:47.167 Current LBA Format: LBA Format #04 00:20:47.167 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.167 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:47.167 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.167 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.167 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.167 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.167 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.167 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.167 00:20:47.167 Get Feature FDP: 00:20:47.167 ================ 00:20:47.167 Enabled: Yes 00:20:47.167 FDP configuration index: 0 00:20:47.167 00:20:47.167 FDP configurations log page 00:20:47.167 =========================== 00:20:47.167 Number of FDP configurations: 1 00:20:47.167 Version: 0 00:20:47.167 Size: 112 00:20:47.167 FDP Configuration Descriptor: 0 00:20:47.167 Descriptor Size: 96 00:20:47.167 Reclaim Group Identifier format: 2 00:20:47.167 FDP Volatile Write Cache: Not Present 00:20:47.167 FDP Configuration: Valid 00:20:47.167 Vendor Specific Size: 0 00:20:47.167 Number of Reclaim Groups: 2 00:20:47.167 Number of Reclaim Unit Handles: 8 00:20:47.167 Max Placement Identifiers: 128 00:20:47.167 Number of
Namespaces Supported: 256 00:20:47.167 Reclaim unit Nominal Size: 6000000 bytes 00:20:47.167 Estimated Reclaim Unit Time Limit: Not Reported 00:20:47.167 RUH Desc #000: RUH Type: Initially Isolated 00:20:47.167 RUH Desc #001: RUH Type: Initially Isolated 00:20:47.167 RUH Desc #002: RUH Type: Initially Isolated 00:20:47.167 RUH Desc #003: RUH Type: Initially Isolated 00:20:47.167 RUH Desc #004: RUH Type: Initially Isolated 00:20:47.167 RUH Desc #005: RUH Type: Initially Isolated 00:20:47.167 RUH Desc #006: RUH Type: Initially Isolated 00:20:47.167 RUH Desc #007: RUH Type: Initially Isolated 00:20:47.167 00:20:47.167 FDP reclaim unit handle usage log page 00:20:47.167 ====================================== 00:20:47.167 Number of Reclaim Unit Handles: 8 00:20:47.167 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:20:47.167 RUH Usage Desc #001: RUH Attributes: Unused 00:20:47.167 RUH Usage Desc #002: RUH Attributes: Unused 00:20:47.167 RUH Usage Desc #003: RUH Attributes: Unused 00:20:47.167 RUH Usage Desc #004: RUH Attributes: Unused 00:20:47.167 RUH Usage Desc #005: RUH Attributes: Unused 00:20:47.167 RUH Usage Desc #006: RUH Attributes: Unused 00:20:47.167 RUH Usage Desc #007: RUH Attributes: Unused 00:20:47.167 00:20:47.167 FDP statistics log page 00:20:47.167 ======================= 00:20:47.167 Host bytes with metadata written: 576823296 00:20:47.167 [2024-10-17 16:35:23.437391] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 64099 terminated unexpected 00:20:47.167 Media bytes with metadata written: 576901120 00:20:47.167 Media bytes erased: 0 00:20:47.167 00:20:47.167 FDP events log page 00:20:47.167 =================== 00:20:47.167 Number of FDP events: 0 00:20:47.167 00:20:47.167 NVM Specific Namespace Data 00:20:47.167 =========================== 00:20:47.167 Logical Block Storage Tag Mask: 0 00:20:47.167 Protection Information Capabilities: 00:20:47.167 16b Guard Protection Information Storage Tag Support: No 00:20:47.167 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:47.167 Storage Tag Check Read Support: No 00:20:47.167 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.167 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.167 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.167 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.167 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.168 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.168 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.168 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.168 ===================================================== 00:20:47.168 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:47.168 ===================================================== 00:20:47.168 Controller Capabilities/Features 00:20:47.168 ================================ 00:20:47.168 Vendor ID: 1b36 00:20:47.168 Subsystem Vendor ID: 1af4 00:20:47.168 Serial Number: 12342 00:20:47.168 Model Number: QEMU NVMe Ctrl 00:20:47.168 Firmware Version: 8.0.0 00:20:47.168 Recommended Arb Burst: 6 00:20:47.168 IEEE OUI Identifier: 00 54 52 00:20:47.168 Multi-path I/O 00:20:47.168
May have multiple subsystem ports: No 00:20:47.168 May have multiple controllers: No 00:20:47.168 Associated with SR-IOV VF: No 00:20:47.168 Max Data Transfer Size: 524288 00:20:47.168 Max Number of Namespaces: 256 00:20:47.168 Max Number of I/O Queues: 64 00:20:47.168 NVMe Specification Version (VS): 1.4 00:20:47.168 NVMe Specification Version (Identify): 1.4 00:20:47.168 Maximum Queue Entries: 2048 00:20:47.168 Contiguous Queues Required: Yes 00:20:47.168 Arbitration Mechanisms Supported 00:20:47.168 Weighted Round Robin: Not Supported 00:20:47.168 Vendor Specific: Not Supported 00:20:47.168 Reset Timeout: 7500 ms 00:20:47.168 Doorbell Stride: 4 bytes 00:20:47.168 NVM Subsystem Reset: Not Supported 00:20:47.168 Command Sets Supported 00:20:47.168 NVM Command Set: Supported 00:20:47.168 Boot Partition: Not Supported 00:20:47.168 Memory Page Size Minimum: 4096 bytes 00:20:47.168 Memory Page Size Maximum: 65536 bytes 00:20:47.168 Persistent Memory Region: Not Supported 00:20:47.168 Optional Asynchronous Events Supported 00:20:47.168 Namespace Attribute Notices: Supported 00:20:47.168 Firmware Activation Notices: Not Supported 00:20:47.168 ANA Change Notices: Not Supported 00:20:47.168 PLE Aggregate Log Change Notices: Not Supported 00:20:47.168 LBA Status Info Alert Notices: Not Supported 00:20:47.168 EGE Aggregate Log Change Notices: Not Supported 00:20:47.168 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.168 Zone Descriptor Change Notices: Not Supported 00:20:47.168 Discovery Log Change Notices: Not Supported 00:20:47.168 Controller Attributes 00:20:47.168 128-bit Host Identifier: Not Supported 00:20:47.168 Non-Operational Permissive Mode: Not Supported 00:20:47.168 NVM Sets: Not Supported 00:20:47.168 Read Recovery Levels: Not Supported 00:20:47.168 Endurance Groups: Not Supported 00:20:47.168 Predictable Latency Mode: Not Supported 00:20:47.168 Traffic Based Keep ALive: Not Supported 00:20:47.168 Namespace Granularity: Not Supported 00:20:47.168 SQ Associations: Not Supported 00:20:47.168 UUID List: Not Supported 00:20:47.168 Multi-Domain Subsystem: Not Supported 00:20:47.168 Fixed Capacity Management: Not Supported 00:20:47.168 Variable Capacity Management: Not Supported 00:20:47.168 Delete Endurance Group: Not Supported 00:20:47.168 Delete NVM Set: Not Supported 00:20:47.168 Extended LBA Formats Supported: Supported 00:20:47.168 Flexible Data Placement Supported: Not Supported 00:20:47.168 00:20:47.168 Controller Memory Buffer Support 00:20:47.168 ================================ 00:20:47.168 Supported: No 00:20:47.168 00:20:47.168 Persistent Memory Region Support 00:20:47.168 ================================ 00:20:47.168 Supported: No 00:20:47.168 00:20:47.168 Admin Command Set Attributes 00:20:47.168 ============================ 00:20:47.168 Security Send/Receive: Not Supported 00:20:47.168 Format NVM: Supported 00:20:47.168 Firmware Activate/Download: Not Supported 00:20:47.168 Namespace Management: Supported 00:20:47.168 Device Self-Test: Not Supported 00:20:47.168 Directives: Supported 00:20:47.168 NVMe-MI: Not Supported 00:20:47.168 Virtualization Management: Not Supported 00:20:47.168 Doorbell Buffer Config: Supported 00:20:47.168 Get LBA Status Capability: Not Supported 00:20:47.168 Command & Feature Lockdown Capability: Not Supported 00:20:47.168 Abort Command Limit: 4 00:20:47.168 Async Event Request Limit: 4 00:20:47.168 Number of Firmware Slots: N/A 00:20:47.168 Firmware Slot 1 Read-Only: N/A 00:20:47.168 Firmware Activation Without Reset: N/A 00:20:47.168 
Multiple Update Detection Support: N/A 00:20:47.168 Firmware Update Granularity: No Information Provided 00:20:47.168 Per-Namespace SMART Log: Yes 00:20:47.168 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.168 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:20:47.168 Command Effects Log Page: Supported 00:20:47.168 Get Log Page Extended Data: Supported 00:20:47.168 Telemetry Log Pages: Not Supported 00:20:47.168 Persistent Event Log Pages: Not Supported 00:20:47.168 Supported Log Pages Log Page: May Support 00:20:47.168 Commands Supported & Effects Log Page: Not Supported 00:20:47.168 Feature Identifiers & Effects Log Page:May Support 00:20:47.168 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.168 Data Area 4 for Telemetry Log: Not Supported 00:20:47.168 Error Log Page Entries Supported: 1 00:20:47.168 Keep Alive: Not Supported 00:20:47.168 00:20:47.168 NVM Command Set Attributes 00:20:47.168 ========================== 00:20:47.168 Submission Queue Entry Size 00:20:47.168 Max: 64 00:20:47.168 Min: 64 00:20:47.168 Completion Queue Entry Size 00:20:47.168 Max: 16 00:20:47.168 Min: 16 00:20:47.168 Number of Namespaces: 256 00:20:47.168 Compare Command: Supported 00:20:47.168 Write Uncorrectable Command: Not Supported 00:20:47.168 Dataset Management Command: Supported 00:20:47.168 Write Zeroes Command: Supported 00:20:47.168 Set Features Save Field: Supported 00:20:47.168 Reservations: Not Supported 00:20:47.168 Timestamp: Supported 00:20:47.168 Copy: Supported 00:20:47.168 Volatile Write Cache: Present 00:20:47.168 Atomic Write Unit (Normal): 1 00:20:47.168 Atomic Write Unit (PFail): 1 00:20:47.168 Atomic Compare & Write Unit: 1 00:20:47.168 Fused Compare & Write: Not Supported 00:20:47.168 Scatter-Gather List 00:20:47.168 SGL Command Set: Supported 00:20:47.168 SGL Keyed: Not Supported 00:20:47.168 SGL Bit Bucket Descriptor: Not Supported 00:20:47.168 SGL Metadata Pointer: Not Supported 00:20:47.168 Oversized SGL: Not Supported 00:20:47.168 SGL Metadata Address: Not Supported 00:20:47.168 SGL Offset: Not Supported 00:20:47.168 Transport SGL Data Block: Not Supported 00:20:47.168 Replay Protected Memory Block: Not Supported 00:20:47.168 00:20:47.168 Firmware Slot Information 00:20:47.168 ========================= 00:20:47.168 Active slot: 1 00:20:47.168 Slot 1 Firmware Revision: 1.0 00:20:47.168 00:20:47.168 00:20:47.168 Commands Supported and Effects 00:20:47.168 ============================== 00:20:47.168 Admin Commands 00:20:47.168 -------------- 00:20:47.168 Delete I/O Submission Queue (00h): Supported 00:20:47.168 Create I/O Submission Queue (01h): Supported 00:20:47.168 Get Log Page (02h): Supported 00:20:47.168 Delete I/O Completion Queue (04h): Supported 00:20:47.168 Create I/O Completion Queue (05h): Supported 00:20:47.168 Identify (06h): Supported 00:20:47.168 Abort (08h): Supported 00:20:47.168 Set Features (09h): Supported 00:20:47.168 Get Features (0Ah): Supported 00:20:47.168 Asynchronous Event Request (0Ch): Supported 00:20:47.168 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:47.168 Directive Send (19h): Supported 00:20:47.168 Directive Receive (1Ah): Supported 00:20:47.168 Virtualization Management (1Ch): Supported 00:20:47.168 Doorbell Buffer Config (7Ch): Supported 00:20:47.168 Format NVM (80h): Supported LBA-Change 00:20:47.168 I/O Commands 00:20:47.168 ------------ 00:20:47.168 Flush (00h): Supported LBA-Change 00:20:47.168 Write (01h): Supported LBA-Change 00:20:47.169 Read (02h): Supported 00:20:47.169 Compare (05h): Supported 
00:20:47.169 Write Zeroes (08h): Supported LBA-Change 00:20:47.169 Dataset Management (09h): Supported LBA-Change 00:20:47.169 Unknown (0Ch): Supported 00:20:47.169 Unknown (12h): Supported 00:20:47.169 Copy (19h): Supported LBA-Change 00:20:47.169 Unknown (1Dh): Supported LBA-Change 00:20:47.169 00:20:47.169 Error Log 00:20:47.169 ========= 00:20:47.169 00:20:47.169 Arbitration 00:20:47.169 =========== 00:20:47.169 Arbitration Burst: no limit 00:20:47.169 00:20:47.169 Power Management 00:20:47.169 ================ 00:20:47.169 Number of Power States: 1 00:20:47.169 Current Power State: Power State #0 00:20:47.169 Power State #0: 00:20:47.169 Max Power: 25.00 W 00:20:47.169 Non-Operational State: Operational 00:20:47.169 Entry Latency: 16 microseconds 00:20:47.169 Exit Latency: 4 microseconds 00:20:47.169 Relative Read Throughput: 0 00:20:47.169 Relative Read Latency: 0 00:20:47.169 Relative Write Throughput: 0 00:20:47.169 Relative Write Latency: 0 00:20:47.169 Idle Power: Not Reported 00:20:47.169 Active Power: Not Reported 00:20:47.169 Non-Operational Permissive Mode: Not Supported 00:20:47.169 00:20:47.169 Health Information 00:20:47.169 ================== 00:20:47.169 Critical Warnings: 00:20:47.169 Available Spare Space: OK 00:20:47.169 Temperature: OK 00:20:47.169 Device Reliability: OK 00:20:47.169 Read Only: No 00:20:47.169 Volatile Memory Backup: OK 00:20:47.169 Current Temperature: 323 Kelvin (50 Celsius) 00:20:47.169 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:47.169 Available Spare: 0% 00:20:47.169 Available Spare Threshold: 0% 00:20:47.169 Life Percentage Used: 0% 00:20:47.169 Data Units Read: 2559 00:20:47.169 Data Units Written: 2346 00:20:47.169 Host Read Commands: 113949 00:20:47.169 Host Write Commands: 112218 00:20:47.169 Controller Busy Time: 0 minutes 00:20:47.169 Power Cycles: 0 00:20:47.169 Power On Hours: 0 hours 00:20:47.169 Unsafe Shutdowns: 0 00:20:47.169 Unrecoverable Media Errors: 0 00:20:47.169 Lifetime Error Log Entries: 0 00:20:47.169 Warning Temperature Time: 0 minutes 00:20:47.169 Critical Temperature Time: 0 minutes 00:20:47.169 00:20:47.169 Number of Queues 00:20:47.169 ================ 00:20:47.169 Number of I/O Submission Queues: 64 00:20:47.169 Number of I/O Completion Queues: 64 00:20:47.169 00:20:47.169 ZNS Specific Controller Data 00:20:47.169 ============================ 00:20:47.169 Zone Append Size Limit: 0 00:20:47.169 00:20:47.169 00:20:47.169 Active Namespaces 00:20:47.169 ================= 00:20:47.169 Namespace ID:1 00:20:47.169 Error Recovery Timeout: Unlimited 00:20:47.169 Command Set Identifier: NVM (00h) 00:20:47.169 Deallocate: Supported 00:20:47.169 Deallocated/Unwritten Error: Supported 00:20:47.169 Deallocated Read Value: All 0x00 00:20:47.169 Deallocate in Write Zeroes: Not Supported 00:20:47.169 Deallocated Guard Field: 0xFFFF 00:20:47.169 Flush: Supported 00:20:47.169 Reservation: Not Supported 00:20:47.169 Namespace Sharing Capabilities: Private 00:20:47.169 Size (in LBAs): 1048576 (4GiB) 00:20:47.169 Capacity (in LBAs): 1048576 (4GiB) 00:20:47.169 Utilization (in LBAs): 1048576 (4GiB) 00:20:47.169 Thin Provisioning: Not Supported 00:20:47.169 Per-NS Atomic Units: No 00:20:47.169 Maximum Single Source Range Length: 128 00:20:47.169 Maximum Copy Length: 128 00:20:47.169 Maximum Source Range Count: 128 00:20:47.169 NGUID/EUI64 Never Reused: No 00:20:47.169 Namespace Write Protected: No 00:20:47.169 Number of LBA Formats: 8 00:20:47.169 Current LBA Format: LBA Format #04 00:20:47.169 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:20:47.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:47.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.169 00:20:47.169 NVM Specific Namespace Data 00:20:47.169 =========================== 00:20:47.169 Logical Block Storage Tag Mask: 0 00:20:47.169 Protection Information Capabilities: 00:20:47.169 16b Guard Protection Information Storage Tag Support: No 00:20:47.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:47.169 Storage Tag Check Read Support: No 00:20:47.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Namespace ID:2 00:20:47.169 Error Recovery Timeout: Unlimited 00:20:47.169 Command Set Identifier: NVM (00h) 00:20:47.169 Deallocate: Supported 00:20:47.169 Deallocated/Unwritten Error: Supported 00:20:47.169 Deallocated Read Value: All 0x00 00:20:47.169 Deallocate in Write Zeroes: Not Supported 00:20:47.169 Deallocated Guard Field: 0xFFFF 00:20:47.169 Flush: Supported 00:20:47.169 Reservation: Not Supported 00:20:47.169 Namespace Sharing Capabilities: Private 00:20:47.169 Size (in LBAs): 1048576 (4GiB) 00:20:47.169 Capacity (in LBAs): 1048576 (4GiB) 00:20:47.169 Utilization (in LBAs): 1048576 (4GiB) 00:20:47.169 Thin Provisioning: Not Supported 00:20:47.169 Per-NS Atomic Units: No 00:20:47.169 Maximum Single Source Range Length: 128 00:20:47.169 Maximum Copy Length: 128 00:20:47.169 Maximum Source Range Count: 128 00:20:47.169 NGUID/EUI64 Never Reused: No 00:20:47.169 Namespace Write Protected: No 00:20:47.169 Number of LBA Formats: 8 00:20:47.169 Current LBA Format: LBA Format #04 00:20:47.169 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.169 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:47.169 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.169 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.169 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.169 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.169 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.169 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.169 00:20:47.169 NVM Specific Namespace Data 00:20:47.169 =========================== 00:20:47.169 Logical Block Storage Tag Mask: 0 00:20:47.169 Protection Information Capabilities: 00:20:47.169 16b Guard Protection Information Storage Tag Support: No 00:20:47.169 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
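
[Editor's note] A quick sanity check on the LBA-format table above (my arithmetic, not part of the log): the namespace reports 1048576 LBAs in the current format #04, which has a 4096-byte data size and no metadata, so the reported 4GiB follows exactly:

```bash
# 1048576 LBAs x 4096 bytes = 2^32 bytes = 4 GiB, matching the
# "Size (in LBAs): 1048576 (4GiB)" line above.
echo "$(( 1048576 * 4096 / 1024**3 )) GiB"   # -> 4 GiB
```
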
00:20:47.169 Storage Tag Check Read Support: No 00:20:47.169 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.169 Namespace ID:3 00:20:47.169 Error Recovery Timeout: Unlimited 00:20:47.169 Command Set Identifier: NVM (00h) 00:20:47.169 Deallocate: Supported 00:20:47.169 Deallocated/Unwritten Error: Supported 00:20:47.169 Deallocated Read Value: All 0x00 00:20:47.169 Deallocate in Write Zeroes: Not Supported 00:20:47.169 Deallocated Guard Field: 0xFFFF 00:20:47.169 Flush: Supported 00:20:47.169 Reservation: Not Supported 00:20:47.169 Namespace Sharing Capabilities: Private 00:20:47.169 Size (in LBAs): 1048576 (4GiB) 00:20:47.429 Capacity (in LBAs): 1048576 (4GiB) 00:20:47.429 Utilization (in LBAs): 1048576 (4GiB) 00:20:47.429 Thin Provisioning: Not Supported 00:20:47.429 Per-NS Atomic Units: No 00:20:47.429 Maximum Single Source Range Length: 128 00:20:47.429 Maximum Copy Length: 128 00:20:47.429 Maximum Source Range Count: 128 00:20:47.429 NGUID/EUI64 Never Reused: No 00:20:47.429 Namespace Write Protected: No 00:20:47.429 Number of LBA Formats: 8 00:20:47.429 Current LBA Format: LBA Format #04 00:20:47.429 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.429 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:47.429 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.429 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.429 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.429 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.429 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.429 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.429 00:20:47.429 NVM Specific Namespace Data 00:20:47.429 =========================== 00:20:47.429 Logical Block Storage Tag Mask: 0 00:20:47.429 Protection Information Capabilities: 00:20:47.429 16b Guard Protection Information Storage Tag Support: No 00:20:47.429 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:47.429 Storage Tag Check Read Support: No 00:20:47.429 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.429 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:20:47.429 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:20:47.688 ===================================================== 00:20:47.689 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:47.689 ===================================================== 00:20:47.689 Controller Capabilities/Features 00:20:47.689 ================================ 00:20:47.689 Vendor ID: 1b36 00:20:47.689 Subsystem Vendor ID: 1af4 00:20:47.689 Serial Number: 12340 00:20:47.689 Model Number: QEMU NVMe Ctrl 00:20:47.689 Firmware Version: 8.0.0 00:20:47.689 Recommended Arb Burst: 6 00:20:47.689 IEEE OUI Identifier: 00 54 52 00:20:47.689 Multi-path I/O 00:20:47.689 May have multiple subsystem ports: No 00:20:47.689 May have multiple controllers: No 00:20:47.689 Associated with SR-IOV VF: No 00:20:47.689 Max Data Transfer Size: 524288 00:20:47.689 Max Number of Namespaces: 256 00:20:47.689 Max Number of I/O Queues: 64 00:20:47.689 NVMe Specification Version (VS): 1.4 00:20:47.689 NVMe Specification Version (Identify): 1.4 00:20:47.689 Maximum Queue Entries: 2048 00:20:47.689 Contiguous Queues Required: Yes 00:20:47.689 Arbitration Mechanisms Supported 00:20:47.689 Weighted Round Robin: Not Supported 00:20:47.689 Vendor Specific: Not Supported 00:20:47.689 Reset Timeout: 7500 ms 00:20:47.689 Doorbell Stride: 4 bytes 00:20:47.689 NVM Subsystem Reset: Not Supported 00:20:47.689 Command Sets Supported 00:20:47.689 NVM Command Set: Supported 00:20:47.689 Boot Partition: Not Supported 00:20:47.689 Memory Page Size Minimum: 4096 bytes 00:20:47.689 Memory Page Size Maximum: 65536 bytes 00:20:47.689 Persistent Memory Region: Not Supported 00:20:47.689 Optional Asynchronous Events Supported 00:20:47.689 Namespace Attribute Notices: Supported 00:20:47.689 Firmware Activation Notices: Not Supported 00:20:47.689 ANA Change Notices: Not Supported 00:20:47.689 PLE Aggregate Log Change Notices: Not Supported 00:20:47.689 LBA Status Info Alert Notices: Not Supported 00:20:47.689 EGE Aggregate Log Change Notices: Not Supported 00:20:47.689 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.689 Zone Descriptor Change Notices: Not Supported 00:20:47.689 Discovery Log Change Notices: Not Supported 00:20:47.689 Controller Attributes 00:20:47.689 128-bit Host Identifier: Not Supported 00:20:47.689 Non-Operational Permissive Mode: Not Supported 00:20:47.689 NVM Sets: Not Supported 00:20:47.689 Read Recovery Levels: Not Supported 00:20:47.689 Endurance Groups: Not Supported 00:20:47.689 Predictable Latency Mode: Not Supported 00:20:47.689 Traffic Based Keep ALive: Not Supported 00:20:47.689 Namespace Granularity: Not Supported 00:20:47.689 SQ Associations: Not Supported 00:20:47.689 UUID List: Not Supported 00:20:47.689 Multi-Domain Subsystem: Not Supported 00:20:47.689 Fixed Capacity Management: Not Supported 00:20:47.689 Variable Capacity Management: Not Supported 00:20:47.689 Delete Endurance Group: Not Supported 00:20:47.689 Delete NVM Set: Not Supported 00:20:47.689 Extended LBA Formats Supported: Supported 00:20:47.689 Flexible Data Placement Supported: Not Supported 00:20:47.689 00:20:47.689 Controller Memory Buffer Support 00:20:47.689 ================================ 00:20:47.689 Supported: No 00:20:47.689 00:20:47.689 Persistent Memory Region Support 00:20:47.689 
================================ 00:20:47.689 Supported: No 00:20:47.689 00:20:47.689 Admin Command Set Attributes 00:20:47.689 ============================ 00:20:47.689 Security Send/Receive: Not Supported 00:20:47.689 Format NVM: Supported 00:20:47.689 Firmware Activate/Download: Not Supported 00:20:47.689 Namespace Management: Supported 00:20:47.689 Device Self-Test: Not Supported 00:20:47.689 Directives: Supported 00:20:47.689 NVMe-MI: Not Supported 00:20:47.689 Virtualization Management: Not Supported 00:20:47.689 Doorbell Buffer Config: Supported 00:20:47.689 Get LBA Status Capability: Not Supported 00:20:47.689 Command & Feature Lockdown Capability: Not Supported 00:20:47.689 Abort Command Limit: 4 00:20:47.689 Async Event Request Limit: 4 00:20:47.689 Number of Firmware Slots: N/A 00:20:47.689 Firmware Slot 1 Read-Only: N/A 00:20:47.689 Firmware Activation Without Reset: N/A 00:20:47.689 Multiple Update Detection Support: N/A 00:20:47.689 Firmware Update Granularity: No Information Provided 00:20:47.689 Per-Namespace SMART Log: Yes 00:20:47.689 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.689 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:20:47.689 Command Effects Log Page: Supported 00:20:47.689 Get Log Page Extended Data: Supported 00:20:47.689 Telemetry Log Pages: Not Supported 00:20:47.689 Persistent Event Log Pages: Not Supported 00:20:47.689 Supported Log Pages Log Page: May Support 00:20:47.689 Commands Supported & Effects Log Page: Not Supported 00:20:47.689 Feature Identifiers & Effects Log Page:May Support 00:20:47.689 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.689 Data Area 4 for Telemetry Log: Not Supported 00:20:47.689 Error Log Page Entries Supported: 1 00:20:47.689 Keep Alive: Not Supported 00:20:47.689 00:20:47.689 NVM Command Set Attributes 00:20:47.689 ========================== 00:20:47.689 Submission Queue Entry Size 00:20:47.689 Max: 64 00:20:47.689 Min: 64 00:20:47.689 Completion Queue Entry Size 00:20:47.689 Max: 16 00:20:47.689 Min: 16 00:20:47.689 Number of Namespaces: 256 00:20:47.689 Compare Command: Supported 00:20:47.689 Write Uncorrectable Command: Not Supported 00:20:47.689 Dataset Management Command: Supported 00:20:47.689 Write Zeroes Command: Supported 00:20:47.689 Set Features Save Field: Supported 00:20:47.689 Reservations: Not Supported 00:20:47.689 Timestamp: Supported 00:20:47.689 Copy: Supported 00:20:47.689 Volatile Write Cache: Present 00:20:47.689 Atomic Write Unit (Normal): 1 00:20:47.689 Atomic Write Unit (PFail): 1 00:20:47.689 Atomic Compare & Write Unit: 1 00:20:47.689 Fused Compare & Write: Not Supported 00:20:47.689 Scatter-Gather List 00:20:47.689 SGL Command Set: Supported 00:20:47.689 SGL Keyed: Not Supported 00:20:47.689 SGL Bit Bucket Descriptor: Not Supported 00:20:47.689 SGL Metadata Pointer: Not Supported 00:20:47.689 Oversized SGL: Not Supported 00:20:47.689 SGL Metadata Address: Not Supported 00:20:47.689 SGL Offset: Not Supported 00:20:47.689 Transport SGL Data Block: Not Supported 00:20:47.689 Replay Protected Memory Block: Not Supported 00:20:47.689 00:20:47.689 Firmware Slot Information 00:20:47.689 ========================= 00:20:47.689 Active slot: 1 00:20:47.689 Slot 1 Firmware Revision: 1.0 00:20:47.689 00:20:47.689 00:20:47.689 Commands Supported and Effects 00:20:47.689 ============================== 00:20:47.689 Admin Commands 00:20:47.689 -------------- 00:20:47.689 Delete I/O Submission Queue (00h): Supported 00:20:47.689 Create I/O Submission Queue (01h): Supported 00:20:47.689 
Get Log Page (02h): Supported 00:20:47.689 Delete I/O Completion Queue (04h): Supported 00:20:47.689 Create I/O Completion Queue (05h): Supported 00:20:47.689 Identify (06h): Supported 00:20:47.689 Abort (08h): Supported 00:20:47.689 Set Features (09h): Supported 00:20:47.689 Get Features (0Ah): Supported 00:20:47.689 Asynchronous Event Request (0Ch): Supported 00:20:47.689 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:47.689 Directive Send (19h): Supported 00:20:47.689 Directive Receive (1Ah): Supported 00:20:47.689 Virtualization Management (1Ch): Supported 00:20:47.689 Doorbell Buffer Config (7Ch): Supported 00:20:47.689 Format NVM (80h): Supported LBA-Change 00:20:47.689 I/O Commands 00:20:47.689 ------------ 00:20:47.689 Flush (00h): Supported LBA-Change 00:20:47.689 Write (01h): Supported LBA-Change 00:20:47.689 Read (02h): Supported 00:20:47.689 Compare (05h): Supported 00:20:47.689 Write Zeroes (08h): Supported LBA-Change 00:20:47.689 Dataset Management (09h): Supported LBA-Change 00:20:47.689 Unknown (0Ch): Supported 00:20:47.689 Unknown (12h): Supported 00:20:47.689 Copy (19h): Supported LBA-Change 00:20:47.689 Unknown (1Dh): Supported LBA-Change 00:20:47.689 00:20:47.689 Error Log 00:20:47.689 ========= 00:20:47.689 00:20:47.689 Arbitration 00:20:47.689 =========== 00:20:47.689 Arbitration Burst: no limit 00:20:47.689 00:20:47.689 Power Management 00:20:47.689 ================ 00:20:47.689 Number of Power States: 1 00:20:47.689 Current Power State: Power State #0 00:20:47.689 Power State #0: 00:20:47.689 Max Power: 25.00 W 00:20:47.689 Non-Operational State: Operational 00:20:47.689 Entry Latency: 16 microseconds 00:20:47.689 Exit Latency: 4 microseconds 00:20:47.689 Relative Read Throughput: 0 00:20:47.689 Relative Read Latency: 0 00:20:47.689 Relative Write Throughput: 0 00:20:47.689 Relative Write Latency: 0 00:20:47.689 Idle Power: Not Reported 00:20:47.689 Active Power: Not Reported 00:20:47.689 Non-Operational Permissive Mode: Not Supported 00:20:47.689 00:20:47.689 Health Information 00:20:47.690 ================== 00:20:47.690 Critical Warnings: 00:20:47.690 Available Spare Space: OK 00:20:47.690 Temperature: OK 00:20:47.690 Device Reliability: OK 00:20:47.690 Read Only: No 00:20:47.690 Volatile Memory Backup: OK 00:20:47.690 Current Temperature: 323 Kelvin (50 Celsius) 00:20:47.690 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:47.690 Available Spare: 0% 00:20:47.690 Available Spare Threshold: 0% 00:20:47.690 Life Percentage Used: 0% 00:20:47.690 Data Units Read: 815 00:20:47.690 Data Units Written: 743 00:20:47.690 Host Read Commands: 37407 00:20:47.690 Host Write Commands: 37193 00:20:47.690 Controller Busy Time: 0 minutes 00:20:47.690 Power Cycles: 0 00:20:47.690 Power On Hours: 0 hours 00:20:47.690 Unsafe Shutdowns: 0 00:20:47.690 Unrecoverable Media Errors: 0 00:20:47.690 Lifetime Error Log Entries: 0 00:20:47.690 Warning Temperature Time: 0 minutes 00:20:47.690 Critical Temperature Time: 0 minutes 00:20:47.690 00:20:47.690 Number of Queues 00:20:47.690 ================ 00:20:47.690 Number of I/O Submission Queues: 64 00:20:47.690 Number of I/O Completion Queues: 64 00:20:47.690 00:20:47.690 ZNS Specific Controller Data 00:20:47.690 ============================ 00:20:47.690 Zone Append Size Limit: 0 00:20:47.690 00:20:47.690 00:20:47.690 Active Namespaces 00:20:47.690 ================= 00:20:47.690 Namespace ID:1 00:20:47.690 Error Recovery Timeout: Unlimited 00:20:47.690 Command Set Identifier: NVM (00h) 00:20:47.690 Deallocate: Supported 
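
[Editor's note] The temperatures in the Health Information block above are printed in Kelvin with the Celsius value in parentheses. A sketch for pulling and converting them from a dump like this one, using only plain grep and shell arithmetic (the binary path is from this log; the subtraction reproduces the 323 Kelvin -> 50 Celsius pairing shown above):

```bash
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path from this log
# Grab the current-temperature line from the identify dump.
"$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 |
    grep -m1 'Current Temperature:'
# Kelvin -> Celsius, matching the "(50 Celsius)" annotation:
echo "$(( 323 - 273 )) C"
```
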
00:20:47.690 Deallocated/Unwritten Error: Supported 00:20:47.690 Deallocated Read Value: All 0x00 00:20:47.690 Deallocate in Write Zeroes: Not Supported 00:20:47.690 Deallocated Guard Field: 0xFFFF 00:20:47.690 Flush: Supported 00:20:47.690 Reservation: Not Supported 00:20:47.690 Metadata Transferred as: Separate Metadata Buffer 00:20:47.690 Namespace Sharing Capabilities: Private 00:20:47.690 Size (in LBAs): 1548666 (5GiB) 00:20:47.690 Capacity (in LBAs): 1548666 (5GiB) 00:20:47.690 Utilization (in LBAs): 1548666 (5GiB) 00:20:47.690 Thin Provisioning: Not Supported 00:20:47.690 Per-NS Atomic Units: No 00:20:47.690 Maximum Single Source Range Length: 128 00:20:47.690 Maximum Copy Length: 128 00:20:47.690 Maximum Source Range Count: 128 00:20:47.690 NGUID/EUI64 Never Reused: No 00:20:47.690 Namespace Write Protected: No 00:20:47.690 Number of LBA Formats: 8 00:20:47.690 Current LBA Format: LBA Format #07 00:20:47.690 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.690 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:47.690 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.690 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.690 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.690 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.690 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.690 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.690 00:20:47.690 NVM Specific Namespace Data 00:20:47.690 =========================== 00:20:47.690 Logical Block Storage Tag Mask: 0 00:20:47.690 Protection Information Capabilities: 00:20:47.690 16b Guard Protection Information Storage Tag Support: No 00:20:47.690 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:47.690 Storage Tag Check Read Support: No 00:20:47.690 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.690 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:20:47.690 16:35:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:20:47.975 ===================================================== 00:20:47.975 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:47.975 ===================================================== 00:20:47.975 Controller Capabilities/Features 00:20:47.975 ================================ 00:20:47.975 Vendor ID: 1b36 00:20:47.975 Subsystem Vendor ID: 1af4 00:20:47.975 Serial Number: 12341 00:20:47.975 Model Number: QEMU NVMe Ctrl 00:20:47.975 Firmware Version: 8.0.0 00:20:47.975 Recommended Arb Burst: 6 00:20:47.975 IEEE OUI Identifier: 00 54 52 00:20:47.975 Multi-path I/O 00:20:47.975 May have multiple subsystem ports: No 00:20:47.975 May have multiple 
controllers: No 00:20:47.975 Associated with SR-IOV VF: No 00:20:47.975 Max Data Transfer Size: 524288 00:20:47.975 Max Number of Namespaces: 256 00:20:47.975 Max Number of I/O Queues: 64 00:20:47.975 NVMe Specification Version (VS): 1.4 00:20:47.975 NVMe Specification Version (Identify): 1.4 00:20:47.975 Maximum Queue Entries: 2048 00:20:47.975 Contiguous Queues Required: Yes 00:20:47.975 Arbitration Mechanisms Supported 00:20:47.975 Weighted Round Robin: Not Supported 00:20:47.975 Vendor Specific: Not Supported 00:20:47.975 Reset Timeout: 7500 ms 00:20:47.975 Doorbell Stride: 4 bytes 00:20:47.975 NVM Subsystem Reset: Not Supported 00:20:47.975 Command Sets Supported 00:20:47.975 NVM Command Set: Supported 00:20:47.975 Boot Partition: Not Supported 00:20:47.975 Memory Page Size Minimum: 4096 bytes 00:20:47.975 Memory Page Size Maximum: 65536 bytes 00:20:47.975 Persistent Memory Region: Not Supported 00:20:47.975 Optional Asynchronous Events Supported 00:20:47.975 Namespace Attribute Notices: Supported 00:20:47.975 Firmware Activation Notices: Not Supported 00:20:47.975 ANA Change Notices: Not Supported 00:20:47.975 PLE Aggregate Log Change Notices: Not Supported 00:20:47.975 LBA Status Info Alert Notices: Not Supported 00:20:47.975 EGE Aggregate Log Change Notices: Not Supported 00:20:47.975 Normal NVM Subsystem Shutdown event: Not Supported 00:20:47.975 Zone Descriptor Change Notices: Not Supported 00:20:47.975 Discovery Log Change Notices: Not Supported 00:20:47.975 Controller Attributes 00:20:47.975 128-bit Host Identifier: Not Supported 00:20:47.975 Non-Operational Permissive Mode: Not Supported 00:20:47.975 NVM Sets: Not Supported 00:20:47.975 Read Recovery Levels: Not Supported 00:20:47.975 Endurance Groups: Not Supported 00:20:47.975 Predictable Latency Mode: Not Supported 00:20:47.975 Traffic Based Keep ALive: Not Supported 00:20:47.975 Namespace Granularity: Not Supported 00:20:47.975 SQ Associations: Not Supported 00:20:47.975 UUID List: Not Supported 00:20:47.975 Multi-Domain Subsystem: Not Supported 00:20:47.975 Fixed Capacity Management: Not Supported 00:20:47.975 Variable Capacity Management: Not Supported 00:20:47.975 Delete Endurance Group: Not Supported 00:20:47.975 Delete NVM Set: Not Supported 00:20:47.975 Extended LBA Formats Supported: Supported 00:20:47.975 Flexible Data Placement Supported: Not Supported 00:20:47.975 00:20:47.975 Controller Memory Buffer Support 00:20:47.975 ================================ 00:20:47.975 Supported: No 00:20:47.975 00:20:47.975 Persistent Memory Region Support 00:20:47.975 ================================ 00:20:47.975 Supported: No 00:20:47.975 00:20:47.975 Admin Command Set Attributes 00:20:47.975 ============================ 00:20:47.975 Security Send/Receive: Not Supported 00:20:47.975 Format NVM: Supported 00:20:47.975 Firmware Activate/Download: Not Supported 00:20:47.975 Namespace Management: Supported 00:20:47.975 Device Self-Test: Not Supported 00:20:47.975 Directives: Supported 00:20:47.975 NVMe-MI: Not Supported 00:20:47.975 Virtualization Management: Not Supported 00:20:47.975 Doorbell Buffer Config: Supported 00:20:47.975 Get LBA Status Capability: Not Supported 00:20:47.975 Command & Feature Lockdown Capability: Not Supported 00:20:47.975 Abort Command Limit: 4 00:20:47.975 Async Event Request Limit: 4 00:20:47.975 Number of Firmware Slots: N/A 00:20:47.975 Firmware Slot 1 Read-Only: N/A 00:20:47.975 Firmware Activation Without Reset: N/A 00:20:47.975 Multiple Update Detection Support: N/A 00:20:47.975 Firmware Update 
Granularity: No Information Provided 00:20:47.975 Per-Namespace SMART Log: Yes 00:20:47.975 Asymmetric Namespace Access Log Page: Not Supported 00:20:47.975 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:20:47.975 Command Effects Log Page: Supported 00:20:47.975 Get Log Page Extended Data: Supported 00:20:47.975 Telemetry Log Pages: Not Supported 00:20:47.975 Persistent Event Log Pages: Not Supported 00:20:47.975 Supported Log Pages Log Page: May Support 00:20:47.976 Commands Supported & Effects Log Page: Not Supported 00:20:47.976 Feature Identifiers & Effects Log Page:May Support 00:20:47.976 NVMe-MI Commands & Effects Log Page: May Support 00:20:47.976 Data Area 4 for Telemetry Log: Not Supported 00:20:47.976 Error Log Page Entries Supported: 1 00:20:47.976 Keep Alive: Not Supported 00:20:47.976 00:20:47.976 NVM Command Set Attributes 00:20:47.976 ========================== 00:20:47.976 Submission Queue Entry Size 00:20:47.976 Max: 64 00:20:47.976 Min: 64 00:20:47.976 Completion Queue Entry Size 00:20:47.976 Max: 16 00:20:47.976 Min: 16 00:20:47.976 Number of Namespaces: 256 00:20:47.976 Compare Command: Supported 00:20:47.976 Write Uncorrectable Command: Not Supported 00:20:47.976 Dataset Management Command: Supported 00:20:47.976 Write Zeroes Command: Supported 00:20:47.976 Set Features Save Field: Supported 00:20:47.976 Reservations: Not Supported 00:20:47.976 Timestamp: Supported 00:20:47.976 Copy: Supported 00:20:47.976 Volatile Write Cache: Present 00:20:47.976 Atomic Write Unit (Normal): 1 00:20:47.976 Atomic Write Unit (PFail): 1 00:20:47.976 Atomic Compare & Write Unit: 1 00:20:47.976 Fused Compare & Write: Not Supported 00:20:47.976 Scatter-Gather List 00:20:47.976 SGL Command Set: Supported 00:20:47.976 SGL Keyed: Not Supported 00:20:47.976 SGL Bit Bucket Descriptor: Not Supported 00:20:47.976 SGL Metadata Pointer: Not Supported 00:20:47.976 Oversized SGL: Not Supported 00:20:47.976 SGL Metadata Address: Not Supported 00:20:47.976 SGL Offset: Not Supported 00:20:47.976 Transport SGL Data Block: Not Supported 00:20:47.976 Replay Protected Memory Block: Not Supported 00:20:47.976 00:20:47.976 Firmware Slot Information 00:20:47.976 ========================= 00:20:47.976 Active slot: 1 00:20:47.976 Slot 1 Firmware Revision: 1.0 00:20:47.976 00:20:47.976 00:20:47.976 Commands Supported and Effects 00:20:47.976 ============================== 00:20:47.976 Admin Commands 00:20:47.976 -------------- 00:20:47.976 Delete I/O Submission Queue (00h): Supported 00:20:47.976 Create I/O Submission Queue (01h): Supported 00:20:47.976 Get Log Page (02h): Supported 00:20:47.976 Delete I/O Completion Queue (04h): Supported 00:20:47.976 Create I/O Completion Queue (05h): Supported 00:20:47.976 Identify (06h): Supported 00:20:47.976 Abort (08h): Supported 00:20:47.976 Set Features (09h): Supported 00:20:47.976 Get Features (0Ah): Supported 00:20:47.976 Asynchronous Event Request (0Ch): Supported 00:20:47.976 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:47.976 Directive Send (19h): Supported 00:20:47.976 Directive Receive (1Ah): Supported 00:20:47.976 Virtualization Management (1Ch): Supported 00:20:47.976 Doorbell Buffer Config (7Ch): Supported 00:20:47.976 Format NVM (80h): Supported LBA-Change 00:20:47.976 I/O Commands 00:20:47.976 ------------ 00:20:47.976 Flush (00h): Supported LBA-Change 00:20:47.976 Write (01h): Supported LBA-Change 00:20:47.976 Read (02h): Supported 00:20:47.976 Compare (05h): Supported 00:20:47.976 Write Zeroes (08h): Supported LBA-Change 00:20:47.976 
Dataset Management (09h): Supported LBA-Change 00:20:47.976 Unknown (0Ch): Supported 00:20:47.976 Unknown (12h): Supported 00:20:47.976 Copy (19h): Supported LBA-Change 00:20:47.976 Unknown (1Dh): Supported LBA-Change 00:20:47.976 00:20:47.976 Error Log 00:20:47.976 ========= 00:20:47.976 00:20:47.976 Arbitration 00:20:47.976 =========== 00:20:47.976 Arbitration Burst: no limit 00:20:47.976 00:20:47.976 Power Management 00:20:47.976 ================ 00:20:47.976 Number of Power States: 1 00:20:47.976 Current Power State: Power State #0 00:20:47.976 Power State #0: 00:20:47.976 Max Power: 25.00 W 00:20:47.976 Non-Operational State: Operational 00:20:47.976 Entry Latency: 16 microseconds 00:20:47.976 Exit Latency: 4 microseconds 00:20:47.976 Relative Read Throughput: 0 00:20:47.976 Relative Read Latency: 0 00:20:47.976 Relative Write Throughput: 0 00:20:47.976 Relative Write Latency: 0 00:20:47.976 Idle Power: Not Reported 00:20:47.976 Active Power: Not Reported 00:20:47.976 Non-Operational Permissive Mode: Not Supported 00:20:47.976 00:20:47.976 Health Information 00:20:47.976 ================== 00:20:47.976 Critical Warnings: 00:20:47.976 Available Spare Space: OK 00:20:47.976 Temperature: OK 00:20:47.976 Device Reliability: OK 00:20:47.976 Read Only: No 00:20:47.976 Volatile Memory Backup: OK 00:20:47.976 Current Temperature: 323 Kelvin (50 Celsius) 00:20:47.976 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:47.976 Available Spare: 0% 00:20:47.976 Available Spare Threshold: 0% 00:20:47.976 Life Percentage Used: 0% 00:20:47.976 Data Units Read: 1178 00:20:47.976 Data Units Written: 1045 00:20:47.976 Host Read Commands: 54820 00:20:47.976 Host Write Commands: 53595 00:20:47.976 Controller Busy Time: 0 minutes 00:20:47.976 Power Cycles: 0 00:20:47.976 Power On Hours: 0 hours 00:20:47.976 Unsafe Shutdowns: 0 00:20:47.976 Unrecoverable Media Errors: 0 00:20:47.976 Lifetime Error Log Entries: 0 00:20:47.976 Warning Temperature Time: 0 minutes 00:20:47.976 Critical Temperature Time: 0 minutes 00:20:47.976 00:20:47.976 Number of Queues 00:20:47.976 ================ 00:20:47.976 Number of I/O Submission Queues: 64 00:20:47.976 Number of I/O Completion Queues: 64 00:20:47.976 00:20:47.976 ZNS Specific Controller Data 00:20:47.976 ============================ 00:20:47.976 Zone Append Size Limit: 0 00:20:47.976 00:20:47.976 00:20:47.976 Active Namespaces 00:20:47.976 ================= 00:20:47.976 Namespace ID:1 00:20:47.976 Error Recovery Timeout: Unlimited 00:20:47.976 Command Set Identifier: NVM (00h) 00:20:47.976 Deallocate: Supported 00:20:47.976 Deallocated/Unwritten Error: Supported 00:20:47.976 Deallocated Read Value: All 0x00 00:20:47.976 Deallocate in Write Zeroes: Not Supported 00:20:47.976 Deallocated Guard Field: 0xFFFF 00:20:47.976 Flush: Supported 00:20:47.976 Reservation: Not Supported 00:20:47.976 Namespace Sharing Capabilities: Private 00:20:47.976 Size (in LBAs): 1310720 (5GiB) 00:20:47.976 Capacity (in LBAs): 1310720 (5GiB) 00:20:47.976 Utilization (in LBAs): 1310720 (5GiB) 00:20:47.976 Thin Provisioning: Not Supported 00:20:47.976 Per-NS Atomic Units: No 00:20:47.976 Maximum Single Source Range Length: 128 00:20:47.976 Maximum Copy Length: 128 00:20:47.976 Maximum Source Range Count: 128 00:20:47.976 NGUID/EUI64 Never Reused: No 00:20:47.976 Namespace Write Protected: No 00:20:47.976 Number of LBA Formats: 8 00:20:47.976 Current LBA Format: LBA Format #04 00:20:47.976 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:47.976 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:20:47.976 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:47.976 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:47.976 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:47.976 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:47.976 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:47.976 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:47.976 00:20:47.976 NVM Specific Namespace Data 00:20:47.976 =========================== 00:20:47.976 Logical Block Storage Tag Mask: 0 00:20:47.976 Protection Information Capabilities: 00:20:47.976 16b Guard Protection Information Storage Tag Support: No 00:20:47.976 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:47.976 Storage Tag Check Read Support: No 00:20:47.976 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:47.976 16:35:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:20:47.976 16:35:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:20:48.248 ===================================================== 00:20:48.248 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:48.248 ===================================================== 00:20:48.248 Controller Capabilities/Features 00:20:48.248 ================================ 00:20:48.248 Vendor ID: 1b36 00:20:48.248 Subsystem Vendor ID: 1af4 00:20:48.248 Serial Number: 12342 00:20:48.248 Model Number: QEMU NVMe Ctrl 00:20:48.248 Firmware Version: 8.0.0 00:20:48.248 Recommended Arb Burst: 6 00:20:48.248 IEEE OUI Identifier: 00 54 52 00:20:48.248 Multi-path I/O 00:20:48.248 May have multiple subsystem ports: No 00:20:48.248 May have multiple controllers: No 00:20:48.248 Associated with SR-IOV VF: No 00:20:48.248 Max Data Transfer Size: 524288 00:20:48.248 Max Number of Namespaces: 256 00:20:48.248 Max Number of I/O Queues: 64 00:20:48.248 NVMe Specification Version (VS): 1.4 00:20:48.248 NVMe Specification Version (Identify): 1.4 00:20:48.248 Maximum Queue Entries: 2048 00:20:48.248 Contiguous Queues Required: Yes 00:20:48.248 Arbitration Mechanisms Supported 00:20:48.248 Weighted Round Robin: Not Supported 00:20:48.248 Vendor Specific: Not Supported 00:20:48.248 Reset Timeout: 7500 ms 00:20:48.248 Doorbell Stride: 4 bytes 00:20:48.248 NVM Subsystem Reset: Not Supported 00:20:48.248 Command Sets Supported 00:20:48.248 NVM Command Set: Supported 00:20:48.248 Boot Partition: Not Supported 00:20:48.248 Memory Page Size Minimum: 4096 bytes 00:20:48.248 Memory Page Size Maximum: 65536 bytes 00:20:48.248 Persistent Memory Region: Not Supported 00:20:48.248 Optional Asynchronous Events Supported 00:20:48.248 Namespace Attribute Notices: Supported 00:20:48.248 
Firmware Activation Notices: Not Supported 00:20:48.248 ANA Change Notices: Not Supported 00:20:48.248 PLE Aggregate Log Change Notices: Not Supported 00:20:48.248 LBA Status Info Alert Notices: Not Supported 00:20:48.248 EGE Aggregate Log Change Notices: Not Supported 00:20:48.248 Normal NVM Subsystem Shutdown event: Not Supported 00:20:48.248 Zone Descriptor Change Notices: Not Supported 00:20:48.248 Discovery Log Change Notices: Not Supported 00:20:48.248 Controller Attributes 00:20:48.248 128-bit Host Identifier: Not Supported 00:20:48.248 Non-Operational Permissive Mode: Not Supported 00:20:48.248 NVM Sets: Not Supported 00:20:48.248 Read Recovery Levels: Not Supported 00:20:48.248 Endurance Groups: Not Supported 00:20:48.248 Predictable Latency Mode: Not Supported 00:20:48.248 Traffic Based Keep ALive: Not Supported 00:20:48.248 Namespace Granularity: Not Supported 00:20:48.248 SQ Associations: Not Supported 00:20:48.248 UUID List: Not Supported 00:20:48.248 Multi-Domain Subsystem: Not Supported 00:20:48.248 Fixed Capacity Management: Not Supported 00:20:48.248 Variable Capacity Management: Not Supported 00:20:48.248 Delete Endurance Group: Not Supported 00:20:48.248 Delete NVM Set: Not Supported 00:20:48.248 Extended LBA Formats Supported: Supported 00:20:48.248 Flexible Data Placement Supported: Not Supported 00:20:48.248 00:20:48.248 Controller Memory Buffer Support 00:20:48.248 ================================ 00:20:48.248 Supported: No 00:20:48.248 00:20:48.248 Persistent Memory Region Support 00:20:48.248 ================================ 00:20:48.248 Supported: No 00:20:48.248 00:20:48.248 Admin Command Set Attributes 00:20:48.248 ============================ 00:20:48.248 Security Send/Receive: Not Supported 00:20:48.248 Format NVM: Supported 00:20:48.248 Firmware Activate/Download: Not Supported 00:20:48.248 Namespace Management: Supported 00:20:48.248 Device Self-Test: Not Supported 00:20:48.248 Directives: Supported 00:20:48.248 NVMe-MI: Not Supported 00:20:48.248 Virtualization Management: Not Supported 00:20:48.248 Doorbell Buffer Config: Supported 00:20:48.248 Get LBA Status Capability: Not Supported 00:20:48.248 Command & Feature Lockdown Capability: Not Supported 00:20:48.248 Abort Command Limit: 4 00:20:48.248 Async Event Request Limit: 4 00:20:48.248 Number of Firmware Slots: N/A 00:20:48.248 Firmware Slot 1 Read-Only: N/A 00:20:48.248 Firmware Activation Without Reset: N/A 00:20:48.248 Multiple Update Detection Support: N/A 00:20:48.248 Firmware Update Granularity: No Information Provided 00:20:48.248 Per-Namespace SMART Log: Yes 00:20:48.248 Asymmetric Namespace Access Log Page: Not Supported 00:20:48.248 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:20:48.248 Command Effects Log Page: Supported 00:20:48.248 Get Log Page Extended Data: Supported 00:20:48.248 Telemetry Log Pages: Not Supported 00:20:48.248 Persistent Event Log Pages: Not Supported 00:20:48.248 Supported Log Pages Log Page: May Support 00:20:48.248 Commands Supported & Effects Log Page: Not Supported 00:20:48.248 Feature Identifiers & Effects Log Page:May Support 00:20:48.248 NVMe-MI Commands & Effects Log Page: May Support 00:20:48.248 Data Area 4 for Telemetry Log: Not Supported 00:20:48.248 Error Log Page Entries Supported: 1 00:20:48.248 Keep Alive: Not Supported 00:20:48.248 00:20:48.248 NVM Command Set Attributes 00:20:48.248 ========================== 00:20:48.248 Submission Queue Entry Size 00:20:48.248 Max: 64 00:20:48.248 Min: 64 00:20:48.248 Completion Queue Entry Size 00:20:48.248 Max: 16 
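
[Editor's note] The queue attributes reported above (2048 maximum queue entries, 64-byte submission entries, 16-byte completion entries) bound per-queue memory. A quick check of my own, not from the log:

```bash
# Worst-case memory per queue pair at the reported limits.
entries=2048
echo "SQ: $(( entries * 64 / 1024 )) KiB, CQ: $(( entries * 16 / 1024 )) KiB"
# -> SQ: 128 KiB, CQ: 32 KiB
```
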
00:20:48.248 Min: 16 00:20:48.248 Number of Namespaces: 256 00:20:48.248 Compare Command: Supported 00:20:48.248 Write Uncorrectable Command: Not Supported 00:20:48.248 Dataset Management Command: Supported 00:20:48.248 Write Zeroes Command: Supported 00:20:48.248 Set Features Save Field: Supported 00:20:48.248 Reservations: Not Supported 00:20:48.248 Timestamp: Supported 00:20:48.248 Copy: Supported 00:20:48.248 Volatile Write Cache: Present 00:20:48.248 Atomic Write Unit (Normal): 1 00:20:48.248 Atomic Write Unit (PFail): 1 00:20:48.248 Atomic Compare & Write Unit: 1 00:20:48.248 Fused Compare & Write: Not Supported 00:20:48.248 Scatter-Gather List 00:20:48.248 SGL Command Set: Supported 00:20:48.248 SGL Keyed: Not Supported 00:20:48.248 SGL Bit Bucket Descriptor: Not Supported 00:20:48.248 SGL Metadata Pointer: Not Supported 00:20:48.248 Oversized SGL: Not Supported 00:20:48.248 SGL Metadata Address: Not Supported 00:20:48.248 SGL Offset: Not Supported 00:20:48.248 Transport SGL Data Block: Not Supported 00:20:48.248 Replay Protected Memory Block: Not Supported 00:20:48.248 00:20:48.248 Firmware Slot Information 00:20:48.248 ========================= 00:20:48.248 Active slot: 1 00:20:48.248 Slot 1 Firmware Revision: 1.0 00:20:48.248 00:20:48.248 00:20:48.248 Commands Supported and Effects 00:20:48.248 ============================== 00:20:48.248 Admin Commands 00:20:48.248 -------------- 00:20:48.248 Delete I/O Submission Queue (00h): Supported 00:20:48.248 Create I/O Submission Queue (01h): Supported 00:20:48.248 Get Log Page (02h): Supported 00:20:48.248 Delete I/O Completion Queue (04h): Supported 00:20:48.248 Create I/O Completion Queue (05h): Supported 00:20:48.248 Identify (06h): Supported 00:20:48.248 Abort (08h): Supported 00:20:48.248 Set Features (09h): Supported 00:20:48.248 Get Features (0Ah): Supported 00:20:48.249 Asynchronous Event Request (0Ch): Supported 00:20:48.249 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:48.249 Directive Send (19h): Supported 00:20:48.249 Directive Receive (1Ah): Supported 00:20:48.249 Virtualization Management (1Ch): Supported 00:20:48.249 Doorbell Buffer Config (7Ch): Supported 00:20:48.249 Format NVM (80h): Supported LBA-Change 00:20:48.249 I/O Commands 00:20:48.249 ------------ 00:20:48.249 Flush (00h): Supported LBA-Change 00:20:48.249 Write (01h): Supported LBA-Change 00:20:48.249 Read (02h): Supported 00:20:48.249 Compare (05h): Supported 00:20:48.249 Write Zeroes (08h): Supported LBA-Change 00:20:48.249 Dataset Management (09h): Supported LBA-Change 00:20:48.249 Unknown (0Ch): Supported 00:20:48.249 Unknown (12h): Supported 00:20:48.249 Copy (19h): Supported LBA-Change 00:20:48.249 Unknown (1Dh): Supported LBA-Change 00:20:48.249 00:20:48.249 Error Log 00:20:48.249 ========= 00:20:48.249 00:20:48.249 Arbitration 00:20:48.249 =========== 00:20:48.249 Arbitration Burst: no limit 00:20:48.249 00:20:48.249 Power Management 00:20:48.249 ================ 00:20:48.249 Number of Power States: 1 00:20:48.249 Current Power State: Power State #0 00:20:48.249 Power State #0: 00:20:48.249 Max Power: 25.00 W 00:20:48.249 Non-Operational State: Operational 00:20:48.249 Entry Latency: 16 microseconds 00:20:48.249 Exit Latency: 4 microseconds 00:20:48.249 Relative Read Throughput: 0 00:20:48.249 Relative Read Latency: 0 00:20:48.249 Relative Write Throughput: 0 00:20:48.249 Relative Write Latency: 0 00:20:48.249 Idle Power: Not Reported 00:20:48.249 Active Power: Not Reported 00:20:48.249 Non-Operational Permissive Mode: Not Supported 
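
[Editor's note] In the Health Information block that follows, SMART "Data Units" are counted in thousands of 512-byte units per the NVMe specification, so the raw counters convert to bytes as sketched here (the 2559 figure is taken from the dump below):

```bash
# 2559 data units read x 1000 x 512 bytes, expressed in MiB.
echo "$(( 2559 * 1000 * 512 / 1024**2 )) MiB"   # ~1249 MiB
```
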
00:20:48.249 00:20:48.249 Health Information 00:20:48.249 ================== 00:20:48.249 Critical Warnings: 00:20:48.249 Available Spare Space: OK 00:20:48.249 Temperature: OK 00:20:48.249 Device Reliability: OK 00:20:48.249 Read Only: No 00:20:48.249 Volatile Memory Backup: OK 00:20:48.249 Current Temperature: 323 Kelvin (50 Celsius) 00:20:48.249 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:48.249 Available Spare: 0% 00:20:48.249 Available Spare Threshold: 0% 00:20:48.249 Life Percentage Used: 0% 00:20:48.249 Data Units Read: 2559 00:20:48.249 Data Units Written: 2346 00:20:48.249 Host Read Commands: 113949 00:20:48.249 Host Write Commands: 112218 00:20:48.249 Controller Busy Time: 0 minutes 00:20:48.249 Power Cycles: 0 00:20:48.249 Power On Hours: 0 hours 00:20:48.249 Unsafe Shutdowns: 0 00:20:48.249 Unrecoverable Media Errors: 0 00:20:48.249 Lifetime Error Log Entries: 0 00:20:48.249 Warning Temperature Time: 0 minutes 00:20:48.249 Critical Temperature Time: 0 minutes 00:20:48.249 00:20:48.249 Number of Queues 00:20:48.249 ================ 00:20:48.249 Number of I/O Submission Queues: 64 00:20:48.249 Number of I/O Completion Queues: 64 00:20:48.249 00:20:48.249 ZNS Specific Controller Data 00:20:48.249 ============================ 00:20:48.249 Zone Append Size Limit: 0 00:20:48.249 00:20:48.249 00:20:48.249 Active Namespaces 00:20:48.249 ================= 00:20:48.249 Namespace ID:1 00:20:48.249 Error Recovery Timeout: Unlimited 00:20:48.249 Command Set Identifier: NVM (00h) 00:20:48.249 Deallocate: Supported 00:20:48.249 Deallocated/Unwritten Error: Supported 00:20:48.249 Deallocated Read Value: All 0x00 00:20:48.249 Deallocate in Write Zeroes: Not Supported 00:20:48.249 Deallocated Guard Field: 0xFFFF 00:20:48.249 Flush: Supported 00:20:48.249 Reservation: Not Supported 00:20:48.249 Namespace Sharing Capabilities: Private 00:20:48.249 Size (in LBAs): 1048576 (4GiB) 00:20:48.249 Capacity (in LBAs): 1048576 (4GiB) 00:20:48.249 Utilization (in LBAs): 1048576 (4GiB) 00:20:48.249 Thin Provisioning: Not Supported 00:20:48.249 Per-NS Atomic Units: No 00:20:48.249 Maximum Single Source Range Length: 128 00:20:48.249 Maximum Copy Length: 128 00:20:48.249 Maximum Source Range Count: 128 00:20:48.249 NGUID/EUI64 Never Reused: No 00:20:48.249 Namespace Write Protected: No 00:20:48.249 Number of LBA Formats: 8 00:20:48.249 Current LBA Format: LBA Format #04 00:20:48.249 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:48.249 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:48.249 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:48.249 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:48.249 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:48.249 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:48.249 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:48.249 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:48.249 00:20:48.249 NVM Specific Namespace Data 00:20:48.249 =========================== 00:20:48.249 Logical Block Storage Tag Mask: 0 00:20:48.249 Protection Information Capabilities: 00:20:48.249 16b Guard Protection Information Storage Tag Support: No 00:20:48.249 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:48.249 Storage Tag Check Read Support: No 00:20:48.249 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Namespace ID:2 00:20:48.249 Error Recovery Timeout: Unlimited 00:20:48.249 Command Set Identifier: NVM (00h) 00:20:48.249 Deallocate: Supported 00:20:48.249 Deallocated/Unwritten Error: Supported 00:20:48.249 Deallocated Read Value: All 0x00 00:20:48.249 Deallocate in Write Zeroes: Not Supported 00:20:48.249 Deallocated Guard Field: 0xFFFF 00:20:48.249 Flush: Supported 00:20:48.249 Reservation: Not Supported 00:20:48.249 Namespace Sharing Capabilities: Private 00:20:48.249 Size (in LBAs): 1048576 (4GiB) 00:20:48.249 Capacity (in LBAs): 1048576 (4GiB) 00:20:48.249 Utilization (in LBAs): 1048576 (4GiB) 00:20:48.249 Thin Provisioning: Not Supported 00:20:48.249 Per-NS Atomic Units: No 00:20:48.249 Maximum Single Source Range Length: 128 00:20:48.249 Maximum Copy Length: 128 00:20:48.249 Maximum Source Range Count: 128 00:20:48.249 NGUID/EUI64 Never Reused: No 00:20:48.249 Namespace Write Protected: No 00:20:48.249 Number of LBA Formats: 8 00:20:48.249 Current LBA Format: LBA Format #04 00:20:48.249 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:48.249 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:48.249 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:48.249 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:48.249 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:48.249 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:48.249 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:48.249 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:48.249 00:20:48.249 NVM Specific Namespace Data 00:20:48.249 =========================== 00:20:48.249 Logical Block Storage Tag Mask: 0 00:20:48.249 Protection Information Capabilities: 00:20:48.249 16b Guard Protection Information Storage Tag Support: No 00:20:48.249 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:48.249 Storage Tag Check Read Support: No 00:20:48.249 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.249 Namespace ID:3 00:20:48.249 Error Recovery Timeout: Unlimited 00:20:48.249 Command Set Identifier: NVM (00h) 00:20:48.249 Deallocate: Supported 00:20:48.249 Deallocated/Unwritten Error: Supported 00:20:48.249 Deallocated Read 
Value: All 0x00 00:20:48.249 Deallocate in Write Zeroes: Not Supported 00:20:48.249 Deallocated Guard Field: 0xFFFF 00:20:48.249 Flush: Supported 00:20:48.249 Reservation: Not Supported 00:20:48.249 Namespace Sharing Capabilities: Private 00:20:48.249 Size (in LBAs): 1048576 (4GiB) 00:20:48.250 Capacity (in LBAs): 1048576 (4GiB) 00:20:48.250 Utilization (in LBAs): 1048576 (4GiB) 00:20:48.250 Thin Provisioning: Not Supported 00:20:48.250 Per-NS Atomic Units: No 00:20:48.250 Maximum Single Source Range Length: 128 00:20:48.250 Maximum Copy Length: 128 00:20:48.250 Maximum Source Range Count: 128 00:20:48.250 NGUID/EUI64 Never Reused: No 00:20:48.250 Namespace Write Protected: No 00:20:48.250 Number of LBA Formats: 8 00:20:48.250 Current LBA Format: LBA Format #04 00:20:48.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:48.250 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:48.250 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:48.250 LBA Format #03: Data Size: 512 Metadata Size: 64 00:20:48.250 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:48.250 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:48.250 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:48.250 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:48.250 00:20:48.250 NVM Specific Namespace Data 00:20:48.250 =========================== 00:20:48.250 Logical Block Storage Tag Mask: 0 00:20:48.250 Protection Information Capabilities: 00:20:48.250 16b Guard Protection Information Storage Tag Support: No 00:20:48.250 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:48.250 Storage Tag Check Read Support: No 00:20:48.250 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.250 16:35:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:20:48.250 16:35:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:20:48.509 ===================================================== 00:20:48.509 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:48.509 ===================================================== 00:20:48.509 Controller Capabilities/Features 00:20:48.509 ================================ 00:20:48.509 Vendor ID: 1b36 00:20:48.509 Subsystem Vendor ID: 1af4 00:20:48.509 Serial Number: 12343 00:20:48.509 Model Number: QEMU NVMe Ctrl 00:20:48.509 Firmware Version: 8.0.0 00:20:48.509 Recommended Arb Burst: 6 00:20:48.509 IEEE OUI Identifier: 00 54 52 00:20:48.509 Multi-path I/O 00:20:48.509 May have multiple subsystem ports: No 00:20:48.509 May have multiple controllers: Yes 00:20:48.509 Associated with SR-IOV VF: No 00:20:48.509 Max Data Transfer Size: 524288 00:20:48.509 Max Number of Namespaces: 
256 00:20:48.509 Max Number of I/O Queues: 64 00:20:48.509 NVMe Specification Version (VS): 1.4 00:20:48.509 NVMe Specification Version (Identify): 1.4 00:20:48.509 Maximum Queue Entries: 2048 00:20:48.509 Contiguous Queues Required: Yes 00:20:48.509 Arbitration Mechanisms Supported 00:20:48.509 Weighted Round Robin: Not Supported 00:20:48.509 Vendor Specific: Not Supported 00:20:48.509 Reset Timeout: 7500 ms 00:20:48.509 Doorbell Stride: 4 bytes 00:20:48.509 NVM Subsystem Reset: Not Supported 00:20:48.509 Command Sets Supported 00:20:48.509 NVM Command Set: Supported 00:20:48.509 Boot Partition: Not Supported 00:20:48.509 Memory Page Size Minimum: 4096 bytes 00:20:48.509 Memory Page Size Maximum: 65536 bytes 00:20:48.509 Persistent Memory Region: Not Supported 00:20:48.509 Optional Asynchronous Events Supported 00:20:48.509 Namespace Attribute Notices: Supported 00:20:48.509 Firmware Activation Notices: Not Supported 00:20:48.509 ANA Change Notices: Not Supported 00:20:48.509 PLE Aggregate Log Change Notices: Not Supported 00:20:48.509 LBA Status Info Alert Notices: Not Supported 00:20:48.509 EGE Aggregate Log Change Notices: Not Supported 00:20:48.509 Normal NVM Subsystem Shutdown event: Not Supported 00:20:48.509 Zone Descriptor Change Notices: Not Supported 00:20:48.509 Discovery Log Change Notices: Not Supported 00:20:48.509 Controller Attributes 00:20:48.509 128-bit Host Identifier: Not Supported 00:20:48.509 Non-Operational Permissive Mode: Not Supported 00:20:48.509 NVM Sets: Not Supported 00:20:48.509 Read Recovery Levels: Not Supported 00:20:48.509 Endurance Groups: Supported 00:20:48.509 Predictable Latency Mode: Not Supported 00:20:48.509 Traffic Based Keep ALive: Not Supported 00:20:48.509 Namespace Granularity: Not Supported 00:20:48.509 SQ Associations: Not Supported 00:20:48.509 UUID List: Not Supported 00:20:48.509 Multi-Domain Subsystem: Not Supported 00:20:48.509 Fixed Capacity Management: Not Supported 00:20:48.509 Variable Capacity Management: Not Supported 00:20:48.509 Delete Endurance Group: Not Supported 00:20:48.509 Delete NVM Set: Not Supported 00:20:48.509 Extended LBA Formats Supported: Supported 00:20:48.509 Flexible Data Placement Supported: Supported 00:20:48.509 00:20:48.509 Controller Memory Buffer Support 00:20:48.509 ================================ 00:20:48.509 Supported: No 00:20:48.509 00:20:48.509 Persistent Memory Region Support 00:20:48.509 ================================ 00:20:48.509 Supported: No 00:20:48.509 00:20:48.509 Admin Command Set Attributes 00:20:48.509 ============================ 00:20:48.509 Security Send/Receive: Not Supported 00:20:48.509 Format NVM: Supported 00:20:48.509 Firmware Activate/Download: Not Supported 00:20:48.509 Namespace Management: Supported 00:20:48.509 Device Self-Test: Not Supported 00:20:48.509 Directives: Supported 00:20:48.509 NVMe-MI: Not Supported 00:20:48.509 Virtualization Management: Not Supported 00:20:48.509 Doorbell Buffer Config: Supported 00:20:48.509 Get LBA Status Capability: Not Supported 00:20:48.509 Command & Feature Lockdown Capability: Not Supported 00:20:48.509 Abort Command Limit: 4 00:20:48.509 Async Event Request Limit: 4 00:20:48.509 Number of Firmware Slots: N/A 00:20:48.509 Firmware Slot 1 Read-Only: N/A 00:20:48.509 Firmware Activation Without Reset: N/A 00:20:48.509 Multiple Update Detection Support: N/A 00:20:48.509 Firmware Update Granularity: No Information Provided 00:20:48.509 Per-Namespace SMART Log: Yes 00:20:48.509 Asymmetric Namespace Access Log Page: Not Supported 
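
[Editor's note] Unlike the 12340-12342 controllers earlier in this section, this 12343 controller reports "May have multiple controllers: Yes", "Endurance Groups: Supported", and "Flexible Data Placement Supported: Supported". A simple, assumption-light way to surface exactly which fields differ between two dumps (binary path and BDFs from this log):

```bash
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path from this log
# Diff the 12342 and 12343 identify output to highlight the fields
# (endurance groups, FDP, NQN, namespace layout, ...) that diverge.
diff \
    <("$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:12.0' -i 0) \
    <("$SPDK_BIN/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0)
```
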
00:20:48.509 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:20:48.509 Command Effects Log Page: Supported 00:20:48.509 Get Log Page Extended Data: Supported 00:20:48.509 Telemetry Log Pages: Not Supported 00:20:48.509 Persistent Event Log Pages: Not Supported 00:20:48.509 Supported Log Pages Log Page: May Support 00:20:48.509 Commands Supported & Effects Log Page: Not Supported 00:20:48.509 Feature Identifiers & Effects Log Page: May Support 00:20:48.509 NVMe-MI Commands & Effects Log Page: May Support 00:20:48.509 Data Area 4 for Telemetry Log: Not Supported 00:20:48.509 Error Log Page Entries Supported: 1 00:20:48.509 Keep Alive: Not Supported 00:20:48.509 00:20:48.509 NVM Command Set Attributes 00:20:48.509 ========================== 00:20:48.509 Submission Queue Entry Size 00:20:48.509 Max: 64 00:20:48.509 Min: 64 00:20:48.509 Completion Queue Entry Size 00:20:48.509 Max: 16 00:20:48.509 Min: 16 00:20:48.509 Number of Namespaces: 256 00:20:48.509 Compare Command: Supported 00:20:48.510 Write Uncorrectable Command: Not Supported 00:20:48.510 Dataset Management Command: Supported 00:20:48.510 Write Zeroes Command: Supported 00:20:48.510 Set Features Save Field: Supported 00:20:48.510 Reservations: Not Supported 00:20:48.510 Timestamp: Supported 00:20:48.510 Copy: Supported 00:20:48.510 Volatile Write Cache: Present 00:20:48.510 Atomic Write Unit (Normal): 1 00:20:48.510 Atomic Write Unit (PFail): 1 00:20:48.510 Atomic Compare & Write Unit: 1 00:20:48.510 Fused Compare & Write: Not Supported 00:20:48.510 Scatter-Gather List 00:20:48.510 SGL Command Set: Supported 00:20:48.510 SGL Keyed: Not Supported 00:20:48.510 SGL Bit Bucket Descriptor: Not Supported 00:20:48.510 SGL Metadata Pointer: Not Supported 00:20:48.510 Oversized SGL: Not Supported 00:20:48.510 SGL Metadata Address: Not Supported 00:20:48.510 SGL Offset: Not Supported 00:20:48.510 Transport SGL Data Block: Not Supported 00:20:48.510 Replay Protected Memory Block: Not Supported 00:20:48.510 00:20:48.510 Firmware Slot Information 00:20:48.510 ========================= 00:20:48.510 Active slot: 1 00:20:48.510 Slot 1 Firmware Revision: 1.0 00:20:48.510 00:20:48.510 00:20:48.510 Commands Supported and Effects 00:20:48.510 ============================== 00:20:48.510 Admin Commands 00:20:48.510 -------------- 00:20:48.510 Delete I/O Submission Queue (00h): Supported 00:20:48.510 Create I/O Submission Queue (01h): Supported 00:20:48.510 Get Log Page (02h): Supported 00:20:48.510 Delete I/O Completion Queue (04h): Supported 00:20:48.510 Create I/O Completion Queue (05h): Supported 00:20:48.510 Identify (06h): Supported 00:20:48.510 Abort (08h): Supported 00:20:48.510 Set Features (09h): Supported 00:20:48.510 Get Features (0Ah): Supported 00:20:48.510 Asynchronous Event Request (0Ch): Supported 00:20:48.510 Namespace Attachment (15h): Supported NS-Inventory-Change 00:20:48.510 Directive Send (19h): Supported 00:20:48.510 Directive Receive (1Ah): Supported 00:20:48.510 Virtualization Management (1Ch): Supported 00:20:48.510 Doorbell Buffer Config (7Ch): Supported 00:20:48.510 Format NVM (80h): Supported LBA-Change 00:20:48.510 I/O Commands 00:20:48.510 ------------ 00:20:48.510 Flush (00h): Supported LBA-Change 00:20:48.510 Write (01h): Supported LBA-Change 00:20:48.510 Read (02h): Supported 00:20:48.510 Compare (05h): Supported 00:20:48.510 Write Zeroes (08h): Supported LBA-Change 00:20:48.510 Dataset Management (09h): Supported LBA-Change 00:20:48.510 Unknown (0Ch): Supported 00:20:48.510 Unknown (12h): Supported 00:20:48.510 Copy
(19h): Supported LBA-Change 00:20:48.510 Unknown (1Dh): Supported LBA-Change 00:20:48.510 00:20:48.510 Error Log 00:20:48.510 ========= 00:20:48.510 00:20:48.510 Arbitration 00:20:48.510 =========== 00:20:48.510 Arbitration Burst: no limit 00:20:48.510 00:20:48.510 Power Management 00:20:48.510 ================ 00:20:48.510 Number of Power States: 1 00:20:48.510 Current Power State: Power State #0 00:20:48.510 Power State #0: 00:20:48.510 Max Power: 25.00 W 00:20:48.510 Non-Operational State: Operational 00:20:48.510 Entry Latency: 16 microseconds 00:20:48.510 Exit Latency: 4 microseconds 00:20:48.510 Relative Read Throughput: 0 00:20:48.510 Relative Read Latency: 0 00:20:48.510 Relative Write Throughput: 0 00:20:48.510 Relative Write Latency: 0 00:20:48.510 Idle Power: Not Reported 00:20:48.510 Active Power: Not Reported 00:20:48.510 Non-Operational Permissive Mode: Not Supported 00:20:48.510 00:20:48.510 Health Information 00:20:48.510 ================== 00:20:48.510 Critical Warnings: 00:20:48.510 Available Spare Space: OK 00:20:48.510 Temperature: OK 00:20:48.510 Device Reliability: OK 00:20:48.510 Read Only: No 00:20:48.510 Volatile Memory Backup: OK 00:20:48.510 Current Temperature: 323 Kelvin (50 Celsius) 00:20:48.510 Temperature Threshold: 343 Kelvin (70 Celsius) 00:20:48.510 Available Spare: 0% 00:20:48.510 Available Spare Threshold: 0% 00:20:48.510 Life Percentage Used: 0% 00:20:48.510 Data Units Read: 997 00:20:48.510 Data Units Written: 926 00:20:48.510 Host Read Commands: 39189 00:20:48.510 Host Write Commands: 38612 00:20:48.510 Controller Busy Time: 0 minutes 00:20:48.510 Power Cycles: 0 00:20:48.510 Power On Hours: 0 hours 00:20:48.510 Unsafe Shutdowns: 0 00:20:48.510 Unrecoverable Media Errors: 0 00:20:48.510 Lifetime Error Log Entries: 0 00:20:48.510 Warning Temperature Time: 0 minutes 00:20:48.510 Critical Temperature Time: 0 minutes 00:20:48.510 00:20:48.510 Number of Queues 00:20:48.510 ================ 00:20:48.510 Number of I/O Submission Queues: 64 00:20:48.510 Number of I/O Completion Queues: 64 00:20:48.510 00:20:48.510 ZNS Specific Controller Data 00:20:48.510 ============================ 00:20:48.510 Zone Append Size Limit: 0 00:20:48.510 00:20:48.510 00:20:48.510 Active Namespaces 00:20:48.510 ================= 00:20:48.510 Namespace ID:1 00:20:48.510 Error Recovery Timeout: Unlimited 00:20:48.510 Command Set Identifier: NVM (00h) 00:20:48.510 Deallocate: Supported 00:20:48.510 Deallocated/Unwritten Error: Supported 00:20:48.510 Deallocated Read Value: All 0x00 00:20:48.510 Deallocate in Write Zeroes: Not Supported 00:20:48.510 Deallocated Guard Field: 0xFFFF 00:20:48.510 Flush: Supported 00:20:48.510 Reservation: Not Supported 00:20:48.510 Namespace Sharing Capabilities: Multiple Controllers 00:20:48.510 Size (in LBAs): 262144 (1GiB) 00:20:48.510 Capacity (in LBAs): 262144 (1GiB) 00:20:48.510 Utilization (in LBAs): 262144 (1GiB) 00:20:48.510 Thin Provisioning: Not Supported 00:20:48.510 Per-NS Atomic Units: No 00:20:48.510 Maximum Single Source Range Length: 128 00:20:48.510 Maximum Copy Length: 128 00:20:48.510 Maximum Source Range Count: 128 00:20:48.510 NGUID/EUI64 Never Reused: No 00:20:48.510 Namespace Write Protected: No 00:20:48.510 Endurance group ID: 1 00:20:48.510 Number of LBA Formats: 8 00:20:48.510 Current LBA Format: LBA Format #04 00:20:48.510 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:48.510 LBA Format #01: Data Size: 512 Metadata Size: 8 00:20:48.510 LBA Format #02: Data Size: 512 Metadata Size: 16 00:20:48.510 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:20:48.510 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:20:48.510 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:20:48.510 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:20:48.510 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:20:48.510 00:20:48.510 Get Feature FDP: 00:20:48.510 ================ 00:20:48.510 Enabled: Yes 00:20:48.510 FDP configuration index: 0 00:20:48.510 00:20:48.510 FDP configurations log page 00:20:48.510 =========================== 00:20:48.510 Number of FDP configurations: 1 00:20:48.510 Version: 0 00:20:48.510 Size: 112 00:20:48.510 FDP Configuration Descriptor: 0 00:20:48.510 Descriptor Size: 96 00:20:48.510 Reclaim Group Identifier format: 2 00:20:48.510 FDP Volatile Write Cache: Not Present 00:20:48.510 FDP Configuration: Valid 00:20:48.510 Vendor Specific Size: 0 00:20:48.510 Number of Reclaim Groups: 2 00:20:48.510 Number of Reclaim Unit Handles: 8 00:20:48.510 Max Placement Identifiers: 128 00:20:48.510 Number of Namespaces Supported: 256 00:20:48.510 Reclaim Unit Nominal Size: 6000000 bytes 00:20:48.510 Estimated Reclaim Unit Time Limit: Not Reported 00:20:48.510 RUH Desc #000: RUH Type: Initially Isolated 00:20:48.510 RUH Desc #001: RUH Type: Initially Isolated 00:20:48.510 RUH Desc #002: RUH Type: Initially Isolated 00:20:48.510 RUH Desc #003: RUH Type: Initially Isolated 00:20:48.510 RUH Desc #004: RUH Type: Initially Isolated 00:20:48.510 RUH Desc #005: RUH Type: Initially Isolated 00:20:48.510 RUH Desc #006: RUH Type: Initially Isolated 00:20:48.510 RUH Desc #007: RUH Type: Initially Isolated 00:20:48.510 00:20:48.510 FDP reclaim unit handle usage log page 00:20:48.769 ====================================== 00:20:48.769 Number of Reclaim Unit Handles: 8 00:20:48.769 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:20:48.769 RUH Usage Desc #001: RUH Attributes: Unused 00:20:48.769 RUH Usage Desc #002: RUH Attributes: Unused 00:20:48.769 RUH Usage Desc #003: RUH Attributes: Unused 00:20:48.769 RUH Usage Desc #004: RUH Attributes: Unused 00:20:48.769 RUH Usage Desc #005: RUH Attributes: Unused 00:20:48.769 RUH Usage Desc #006: RUH Attributes: Unused 00:20:48.769 RUH Usage Desc #007: RUH Attributes: Unused 00:20:48.769 00:20:48.769 FDP statistics log page 00:20:48.769 ======================= 00:20:48.769 Host bytes with metadata written: 576823296 00:20:48.769 Media bytes with metadata written: 576901120 00:20:48.769 Media bytes erased: 0 00:20:48.769 00:20:48.769 FDP events log page 00:20:48.769 =================== 00:20:48.769 Number of FDP events: 0 00:20:48.769 00:20:48.769 NVM Specific Namespace Data 00:20:48.769 =========================== 00:20:48.769 Logical Block Storage Tag Mask: 0 00:20:48.769 Protection Information Capabilities: 00:20:48.769 16b Guard Protection Information Storage Tag Support: No 00:20:48.769 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:20:48.769 Storage Tag Check Read Support: No 00:20:48.769 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:20:48.769 00:20:48.769 real 0m1.769s 00:20:48.769 user 0m0.636s 00:20:48.769 sys 0m0.900s 00:20:48.769 ************************************ 00:20:48.769 END TEST nvme_identify 00:20:48.769 ************************************ 00:20:48.769 16:35:24 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:48.769 16:35:24 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.769 16:35:24 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:20:48.769 16:35:24 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:20:48.769 16:35:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:48.769 16:35:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:48.769 ************************************ 00:20:48.769 START TEST nvme_perf 00:20:48.769 ************************************ 00:20:48.769 16:35:24 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:20:48.769 16:35:24 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:20:50.151 Initializing NVMe Controllers 00:20:50.151 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:50.151 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:20:50.151 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:20:50.151 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:20:50.151 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:50.151 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:20:50.151 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:20:50.151 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:20:50.151 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:20:50.151 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:20:50.151 Initialization complete. Launching workers. 
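The identify output above can be cross-checked with plain shell arithmetic. Both namespaces report LBA Format #04 as current (4096-byte data blocks, no metadata), so capacity in bytes is simply the LBA count times 4096, and the Celsius figures in the health log correspond to Kelvin minus 273. A minimal sketch in bash (no SPDK tooling assumed):

  echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB (the private namespace above)
  echo $(( 262144 * 4096 ))    # 1073741824 bytes = 1 GiB (the fdp-subsys3 namespace)
  echo $(( 323 - 273 ))        # 50 -> "323 Kelvin (50 Celsius)" current temperature
  echo $(( 343 - 273 ))        # 70 -> "343 Kelvin (70 Celsius)" threshold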
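The FDP statistics log page above admits a similar consistency check: comparing media bytes against host bytes written gives the extra media traffic introduced by placement handling, a rough write-amplification-style figure (interpretation assumed, not stated by the tool). Sketch, with awk only for the floating-point division:

  echo $(( 576901120 - 576823296 ))                       # 77824 extra media bytes
  awk 'BEGIN { printf "%.6f\n", 576901120 / 576823296 }'  # ~1.000135 media/host ratio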
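In the nvme_perf run that follows, the IOPS and MiB/s columns of the summary table are tied together by the 12288-byte I/O size passed via -o: MiB/s = IOPS * 12288 / 2^20. A quick check against the reported figures:

  awk 'BEGIN { printf "%.2f\n", 12471.76 * 12288 / 1048576 }'  # 146.15 MiB/s, as reported below
  awk 'BEGIN { printf "%.2f\n", 5 * 12471.76 + 12535.71 }'     # 74894.51 total IOPS (table shows 74894.49 after per-device rounding)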
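The per-device latency histograms further below are cumulative: each bucket line gives a range in microseconds, the cumulative percentage of I/Os completed by the bucket's upper bound, and the bucket's I/O count in parentheses. The percentile summaries (1.00000%, 50.00000%, 99.00000%, ...) are simply the first bucket whose cumulative share reaches the target. A sketch of that lookup, assuming the bucket lines have been extracted to a hypothetical buckets.txt:

  # e.g. "9633.002 - 9685.642: 51.5465% ( 208)" is the median bucket for 0000:00:10.0
  awk -v target=50 '{ pct = $4 + 0 }
      pct >= target { sub(/:$/, "", $3); print $3 " us (" pct "%)"; exit }' buckets.txt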
00:20:50.151 ======================================================== 00:20:50.151 Latency(us) 00:20:50.151 Device Information : IOPS MiB/s Average min max 00:20:50.151 PCIE (0000:00:10.0) NSID 1 from core 0: 12471.76 146.15 10290.47 7986.56 46844.82 00:20:50.151 PCIE (0000:00:11.0) NSID 1 from core 0: 12471.76 146.15 10270.71 8082.06 44323.00 00:20:50.151 PCIE (0000:00:13.0) NSID 1 from core 0: 12471.76 146.15 10248.25 8105.23 42382.05 00:20:50.151 PCIE (0000:00:12.0) NSID 1 from core 0: 12471.76 146.15 10225.30 8094.46 39642.06 00:20:50.151 PCIE (0000:00:12.0) NSID 2 from core 0: 12471.76 146.15 10201.37 8148.91 37186.65 00:20:50.151 PCIE (0000:00:12.0) NSID 3 from core 0: 12535.71 146.90 10126.99 8088.76 28741.28 00:20:50.151 ======================================================== 00:20:50.151 Total : 74894.49 877.67 10227.10 7986.56 46844.82 00:20:50.151 00:20:50.151 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:20:50.151 ================================================================================= 00:20:50.151 1.00000% : 8264.379us 00:20:50.151 10.00000% : 8738.133us 00:20:50.152 25.00000% : 9106.609us 00:20:50.152 50.00000% : 9685.642us 00:20:50.152 75.00000% : 10527.871us 00:20:50.152 90.00000% : 11370.101us 00:20:50.152 95.00000% : 12422.888us 00:20:50.152 98.00000% : 17370.988us 00:20:50.152 99.00000% : 37268.665us 00:20:50.152 99.50000% : 44638.175us 00:20:50.152 99.90000% : 46533.192us 00:20:50.152 99.99000% : 46954.307us 00:20:50.152 99.99900% : 46954.307us 00:20:50.152 99.99990% : 46954.307us 00:20:50.152 99.99999% : 46954.307us 00:20:50.152 00:20:50.152 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:20:50.152 ================================================================================= 00:20:50.152 1.00000% : 8369.658us 00:20:50.152 10.00000% : 8738.133us 00:20:50.152 25.00000% : 9106.609us 00:20:50.152 50.00000% : 9685.642us 00:20:50.152 75.00000% : 10527.871us 00:20:50.152 90.00000% : 11317.462us 00:20:50.152 95.00000% : 12633.446us 00:20:50.152 98.00000% : 17581.545us 00:20:50.152 99.00000% : 34952.533us 00:20:50.152 99.50000% : 42322.043us 00:20:50.152 99.90000% : 44006.503us 00:20:50.152 99.99000% : 44427.618us 00:20:50.152 99.99900% : 44427.618us 00:20:50.152 99.99990% : 44427.618us 00:20:50.152 99.99999% : 44427.618us 00:20:50.152 00:20:50.152 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:20:50.152 ================================================================================= 00:20:50.152 1.00000% : 8369.658us 00:20:50.152 10.00000% : 8790.773us 00:20:50.152 25.00000% : 9106.609us 00:20:50.152 50.00000% : 9633.002us 00:20:50.152 75.00000% : 10527.871us 00:20:50.152 90.00000% : 11264.822us 00:20:50.152 95.00000% : 12738.724us 00:20:50.152 98.00000% : 18107.939us 00:20:50.152 99.00000% : 32636.402us 00:20:50.152 99.50000% : 40216.469us 00:20:50.152 99.90000% : 42111.486us 00:20:50.152 99.99000% : 42532.601us 00:20:50.152 99.99900% : 42532.601us 00:20:50.152 99.99990% : 42532.601us 00:20:50.152 99.99999% : 42532.601us 00:20:50.152 00:20:50.152 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:20:50.152 ================================================================================= 00:20:50.152 1.00000% : 8317.018us 00:20:50.152 10.00000% : 8790.773us 00:20:50.152 25.00000% : 9106.609us 00:20:50.152 50.00000% : 9633.002us 00:20:50.152 75.00000% : 10527.871us 00:20:50.152 90.00000% : 11317.462us 00:20:50.152 95.00000% : 12949.282us 00:20:50.152 98.00000% : 17897.382us 
00:20:50.152 99.00000% : 30320.270us 00:20:50.152 99.50000% : 37479.222us 00:20:50.152 99.90000% : 39374.239us 00:20:50.152 99.99000% : 39795.354us 00:20:50.152 99.99900% : 39795.354us 00:20:50.152 99.99990% : 39795.354us 00:20:50.152 99.99999% : 39795.354us 00:20:50.152 00:20:50.152 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:20:50.152 ================================================================================= 00:20:50.152 1.00000% : 8369.658us 00:20:50.152 10.00000% : 8790.773us 00:20:50.152 25.00000% : 9106.609us 00:20:50.152 50.00000% : 9633.002us 00:20:50.152 75.00000% : 10527.871us 00:20:50.152 90.00000% : 11317.462us 00:20:50.152 95.00000% : 12738.724us 00:20:50.152 98.00000% : 17581.545us 00:20:50.152 99.00000% : 27583.023us 00:20:50.152 99.50000% : 34741.976us 00:20:50.152 99.90000% : 36847.550us 00:20:50.152 99.99000% : 37268.665us 00:20:50.152 99.99900% : 37268.665us 00:20:50.152 99.99990% : 37268.665us 00:20:50.152 99.99999% : 37268.665us 00:20:50.152 00:20:50.152 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:20:50.152 ================================================================================= 00:20:50.152 1.00000% : 8369.658us 00:20:50.152 10.00000% : 8790.773us 00:20:50.152 25.00000% : 9106.609us 00:20:50.152 50.00000% : 9685.642us 00:20:50.152 75.00000% : 10580.511us 00:20:50.152 90.00000% : 11370.101us 00:20:50.152 95.00000% : 12633.446us 00:20:50.152 98.00000% : 17370.988us 00:20:50.152 99.00000% : 20213.513us 00:20:50.152 99.50000% : 26530.236us 00:20:50.152 99.90000% : 28425.253us 00:20:50.152 99.99000% : 28846.368us 00:20:50.152 99.99900% : 28846.368us 00:20:50.152 99.99990% : 28846.368us 00:20:50.152 99.99999% : 28846.368us 00:20:50.152 00:20:50.152 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:20:50.152 ============================================================================== 00:20:50.152 Range in us Cumulative IO count 00:20:50.152 7948.543 - 8001.182: 0.0080% ( 1) 00:20:50.152 8001.182 - 8053.822: 0.0321% ( 3) 00:20:50.152 8053.822 - 8106.461: 0.1282% ( 12) 00:20:50.152 8106.461 - 8159.100: 0.3285% ( 25) 00:20:50.152 8159.100 - 8211.740: 0.6330% ( 38) 00:20:50.152 8211.740 - 8264.379: 1.0016% ( 46) 00:20:50.152 8264.379 - 8317.018: 1.4503% ( 56) 00:20:50.152 8317.018 - 8369.658: 2.1074% ( 82) 00:20:50.152 8369.658 - 8422.297: 2.9247% ( 102) 00:20:50.152 8422.297 - 8474.937: 3.8381% ( 114) 00:20:50.152 8474.937 - 8527.576: 4.8878% ( 131) 00:20:50.152 8527.576 - 8580.215: 6.1779% ( 161) 00:20:50.152 8580.215 - 8632.855: 7.7885% ( 201) 00:20:50.152 8632.855 - 8685.494: 9.6234% ( 229) 00:20:50.152 8685.494 - 8738.133: 11.4663% ( 230) 00:20:50.152 8738.133 - 8790.773: 13.5978% ( 266) 00:20:50.152 8790.773 - 8843.412: 15.8814% ( 285) 00:20:50.152 8843.412 - 8896.051: 18.0208% ( 267) 00:20:50.152 8896.051 - 8948.691: 20.2163% ( 274) 00:20:50.152 8948.691 - 9001.330: 22.4359% ( 277) 00:20:50.152 9001.330 - 9053.969: 24.7115% ( 284) 00:20:50.152 9053.969 - 9106.609: 27.0994% ( 298) 00:20:50.152 9106.609 - 9159.248: 29.6554% ( 319) 00:20:50.152 9159.248 - 9211.888: 32.1474% ( 311) 00:20:50.152 9211.888 - 9264.527: 34.6154% ( 308) 00:20:50.152 9264.527 - 9317.166: 36.9391% ( 290) 00:20:50.152 9317.166 - 9369.806: 39.2067% ( 283) 00:20:50.152 9369.806 - 9422.445: 41.5385% ( 291) 00:20:50.152 9422.445 - 9475.084: 43.7981% ( 282) 00:20:50.152 9475.084 - 9527.724: 45.9215% ( 265) 00:20:50.152 9527.724 - 9580.363: 47.9728% ( 256) 00:20:50.152 9580.363 - 9633.002: 49.8798% ( 238) 00:20:50.152 9633.002 
- 9685.642: 51.5465% ( 208) 00:20:50.152 9685.642 - 9738.281: 53.1330% ( 198) 00:20:50.152 9738.281 - 9790.920: 54.7516% ( 202) 00:20:50.152 9790.920 - 9843.560: 56.2340% ( 185) 00:20:50.152 9843.560 - 9896.199: 57.7724% ( 192) 00:20:50.152 9896.199 - 9948.839: 59.2067% ( 179) 00:20:50.152 9948.839 - 10001.478: 60.8413% ( 204) 00:20:50.152 10001.478 - 10054.117: 62.4679% ( 203) 00:20:50.152 10054.117 - 10106.757: 64.0946% ( 203) 00:20:50.152 10106.757 - 10159.396: 65.5449% ( 181) 00:20:50.152 10159.396 - 10212.035: 67.0833% ( 192) 00:20:50.152 10212.035 - 10264.675: 68.3814% ( 162) 00:20:50.152 10264.675 - 10317.314: 69.8237% ( 180) 00:20:50.152 10317.314 - 10369.953: 71.2580% ( 179) 00:20:50.152 10369.953 - 10422.593: 72.6683% ( 176) 00:20:50.152 10422.593 - 10475.232: 73.8942% ( 153) 00:20:50.152 10475.232 - 10527.871: 75.2324% ( 167) 00:20:50.152 10527.871 - 10580.511: 76.4744% ( 155) 00:20:50.152 10580.511 - 10633.150: 77.7564% ( 160) 00:20:50.152 10633.150 - 10685.790: 79.0625% ( 163) 00:20:50.152 10685.790 - 10738.429: 80.2404% ( 147) 00:20:50.152 10738.429 - 10791.068: 81.3782% ( 142) 00:20:50.152 10791.068 - 10843.708: 82.4760% ( 137) 00:20:50.152 10843.708 - 10896.347: 83.5176% ( 130) 00:20:50.152 10896.347 - 10948.986: 84.5433% ( 128) 00:20:50.152 10948.986 - 11001.626: 85.5369% ( 124) 00:20:50.152 11001.626 - 11054.265: 86.3782% ( 105) 00:20:50.152 11054.265 - 11106.904: 87.2516% ( 109) 00:20:50.152 11106.904 - 11159.544: 88.0449% ( 99) 00:20:50.152 11159.544 - 11212.183: 88.6779% ( 79) 00:20:50.152 11212.183 - 11264.822: 89.2949% ( 77) 00:20:50.152 11264.822 - 11317.462: 89.8478% ( 69) 00:20:50.152 11317.462 - 11370.101: 90.3686% ( 65) 00:20:50.152 11370.101 - 11422.741: 90.8494% ( 60) 00:20:50.152 11422.741 - 11475.380: 91.3061% ( 57) 00:20:50.152 11475.380 - 11528.019: 91.7308% ( 53) 00:20:50.153 11528.019 - 11580.659: 92.0513% ( 40) 00:20:50.153 11580.659 - 11633.298: 92.3558% ( 38) 00:20:50.153 11633.298 - 11685.937: 92.6122% ( 32) 00:20:50.153 11685.937 - 11738.577: 92.8846% ( 34) 00:20:50.153 11738.577 - 11791.216: 93.1651% ( 35) 00:20:50.153 11791.216 - 11843.855: 93.3814% ( 27) 00:20:50.153 11843.855 - 11896.495: 93.6058% ( 28) 00:20:50.153 11896.495 - 11949.134: 93.7740% ( 21) 00:20:50.153 11949.134 - 12001.773: 93.9343% ( 20) 00:20:50.153 12001.773 - 12054.413: 94.0946% ( 20) 00:20:50.153 12054.413 - 12107.052: 94.2949% ( 25) 00:20:50.153 12107.052 - 12159.692: 94.4151% ( 15) 00:20:50.153 12159.692 - 12212.331: 94.5753% ( 20) 00:20:50.153 12212.331 - 12264.970: 94.6955% ( 15) 00:20:50.153 12264.970 - 12317.610: 94.8317% ( 17) 00:20:50.153 12317.610 - 12370.249: 94.9119% ( 10) 00:20:50.153 12370.249 - 12422.888: 95.0000% ( 11) 00:20:50.153 12422.888 - 12475.528: 95.0801% ( 10) 00:20:50.153 12475.528 - 12528.167: 95.1683% ( 11) 00:20:50.153 12528.167 - 12580.806: 95.2484% ( 10) 00:20:50.153 12580.806 - 12633.446: 95.3285% ( 10) 00:20:50.153 12633.446 - 12686.085: 95.4167% ( 11) 00:20:50.153 12686.085 - 12738.724: 95.4968% ( 10) 00:20:50.153 12738.724 - 12791.364: 95.5689% ( 9) 00:20:50.153 12791.364 - 12844.003: 95.6330% ( 8) 00:20:50.153 12844.003 - 12896.643: 95.7131% ( 10) 00:20:50.153 12896.643 - 12949.282: 95.8013% ( 11) 00:20:50.153 12949.282 - 13001.921: 95.9054% ( 13) 00:20:50.153 13001.921 - 13054.561: 95.9455% ( 5) 00:20:50.153 13054.561 - 13107.200: 96.0096% ( 8) 00:20:50.153 13107.200 - 13159.839: 96.0657% ( 7) 00:20:50.153 13159.839 - 13212.479: 96.1058% ( 5) 00:20:50.153 13212.479 - 13265.118: 96.1378% ( 4) 00:20:50.153 13265.118 - 13317.757: 96.1939% ( 7) 
00:20:50.153 13317.757 - 13370.397: 96.2420% ( 6) 00:20:50.153 13370.397 - 13423.036: 96.2580% ( 2) 00:20:50.153 13423.036 - 13475.676: 96.2901% ( 4) 00:20:50.153 13475.676 - 13580.954: 96.3622% ( 9) 00:20:50.153 13580.954 - 13686.233: 96.4103% ( 6) 00:20:50.153 13686.233 - 13791.512: 96.4824% ( 9) 00:20:50.153 13791.512 - 13896.790: 96.5465% ( 8) 00:20:50.153 13896.790 - 14002.069: 96.5785% ( 4) 00:20:50.153 14002.069 - 14107.348: 96.6186% ( 5) 00:20:50.153 14107.348 - 14212.627: 96.6587% ( 5) 00:20:50.153 14212.627 - 14317.905: 96.6907% ( 4) 00:20:50.153 14317.905 - 14423.184: 96.7228% ( 4) 00:20:50.153 14423.184 - 14528.463: 96.7548% ( 4) 00:20:50.153 14528.463 - 14633.741: 96.7949% ( 5) 00:20:50.153 14633.741 - 14739.020: 96.8269% ( 4) 00:20:50.153 14739.020 - 14844.299: 96.8750% ( 6) 00:20:50.153 14844.299 - 14949.578: 96.9391% ( 8) 00:20:50.153 14949.578 - 15054.856: 97.0112% ( 9) 00:20:50.153 15054.856 - 15160.135: 97.0593% ( 6) 00:20:50.153 15160.135 - 15265.414: 97.1234% ( 8) 00:20:50.153 15265.414 - 15370.692: 97.1715% ( 6) 00:20:50.153 15370.692 - 15475.971: 97.2276% ( 7) 00:20:50.153 15475.971 - 15581.250: 97.2837% ( 7) 00:20:50.153 15581.250 - 15686.529: 97.3317% ( 6) 00:20:50.153 15686.529 - 15791.807: 97.3958% ( 8) 00:20:50.153 15791.807 - 15897.086: 97.4439% ( 6) 00:20:50.153 15897.086 - 16002.365: 97.5080% ( 8) 00:20:50.153 16002.365 - 16107.643: 97.5561% ( 6) 00:20:50.153 16107.643 - 16212.922: 97.6122% ( 7) 00:20:50.153 16212.922 - 16318.201: 97.6683% ( 7) 00:20:50.153 16318.201 - 16423.480: 97.7244% ( 7) 00:20:50.153 16423.480 - 16528.758: 97.7804% ( 7) 00:20:50.153 16528.758 - 16634.037: 97.8365% ( 7) 00:20:50.153 16634.037 - 16739.316: 97.8926% ( 7) 00:20:50.153 16739.316 - 16844.594: 97.9247% ( 4) 00:20:50.153 16844.594 - 16949.873: 97.9487% ( 3) 00:20:50.153 17160.431 - 17265.709: 97.9808% ( 4) 00:20:50.153 17265.709 - 17370.988: 98.0048% ( 3) 00:20:50.153 17370.988 - 17476.267: 98.0208% ( 2) 00:20:50.153 17476.267 - 17581.545: 98.0449% ( 3) 00:20:50.153 17581.545 - 17686.824: 98.0689% ( 3) 00:20:50.153 17686.824 - 17792.103: 98.0849% ( 2) 00:20:50.153 17792.103 - 17897.382: 98.1170% ( 4) 00:20:50.153 17897.382 - 18002.660: 98.1330% ( 2) 00:20:50.153 18002.660 - 18107.939: 98.1571% ( 3) 00:20:50.153 18107.939 - 18213.218: 98.1731% ( 2) 00:20:50.153 18213.218 - 18318.496: 98.2131% ( 5) 00:20:50.153 18318.496 - 18423.775: 98.2532% ( 5) 00:20:50.153 18423.775 - 18529.054: 98.2853% ( 4) 00:20:50.153 18529.054 - 18634.333: 98.3413% ( 7) 00:20:50.153 18634.333 - 18739.611: 98.3734% ( 4) 00:20:50.153 18739.611 - 18844.890: 98.4215% ( 6) 00:20:50.153 18844.890 - 18950.169: 98.4696% ( 6) 00:20:50.153 18950.169 - 19055.447: 98.4936% ( 3) 00:20:50.153 19055.447 - 19160.726: 98.5417% ( 6) 00:20:50.153 19160.726 - 19266.005: 98.5817% ( 5) 00:20:50.153 19266.005 - 19371.284: 98.6378% ( 7) 00:20:50.153 19371.284 - 19476.562: 98.6699% ( 4) 00:20:50.153 19476.562 - 19581.841: 98.7099% ( 5) 00:20:50.153 19581.841 - 19687.120: 98.7500% ( 5) 00:20:50.153 19687.120 - 19792.398: 98.7660% ( 2) 00:20:50.153 19792.398 - 19897.677: 98.7901% ( 3) 00:20:50.153 19897.677 - 20002.956: 98.8061% ( 2) 00:20:50.153 20002.956 - 20108.235: 98.8301% ( 3) 00:20:50.153 20108.235 - 20213.513: 98.8462% ( 2) 00:20:50.153 20213.513 - 20318.792: 98.8702% ( 3) 00:20:50.153 20318.792 - 20424.071: 98.8942% ( 3) 00:20:50.153 20424.071 - 20529.349: 98.9103% ( 2) 00:20:50.153 20529.349 - 20634.628: 98.9343% ( 3) 00:20:50.153 20634.628 - 20739.907: 98.9583% ( 3) 00:20:50.153 20739.907 - 20845.186: 98.9744% ( 2) 
00:20:50.153 36847.550 - 37058.108: 98.9984% ( 3) 00:20:50.153 37058.108 - 37268.665: 99.0385% ( 5) 00:20:50.153 37268.665 - 37479.222: 99.0785% ( 5) 00:20:50.153 37479.222 - 37689.780: 99.1266% ( 6) 00:20:50.153 37689.780 - 37900.337: 99.1747% ( 6) 00:20:50.153 37900.337 - 38110.895: 99.2228% ( 6) 00:20:50.153 38110.895 - 38321.452: 99.2708% ( 6) 00:20:50.153 38321.452 - 38532.010: 99.3189% ( 6) 00:20:50.153 38532.010 - 38742.567: 99.3590% ( 5) 00:20:50.153 38742.567 - 38953.124: 99.4071% ( 6) 00:20:50.153 38953.124 - 39163.682: 99.4551% ( 6) 00:20:50.153 39163.682 - 39374.239: 99.4872% ( 4) 00:20:50.153 44427.618 - 44638.175: 99.5272% ( 5) 00:20:50.153 44638.175 - 44848.733: 99.5753% ( 6) 00:20:50.153 44848.733 - 45059.290: 99.6154% ( 5) 00:20:50.153 45059.290 - 45269.847: 99.6635% ( 6) 00:20:50.153 45269.847 - 45480.405: 99.7035% ( 5) 00:20:50.153 45480.405 - 45690.962: 99.7516% ( 6) 00:20:50.153 45690.962 - 45901.520: 99.7997% ( 6) 00:20:50.153 45901.520 - 46112.077: 99.8478% ( 6) 00:20:50.153 46112.077 - 46322.635: 99.8878% ( 5) 00:20:50.153 46322.635 - 46533.192: 99.9279% ( 5) 00:20:50.153 46533.192 - 46743.749: 99.9760% ( 6) 00:20:50.153 46743.749 - 46954.307: 100.0000% ( 3) 00:20:50.153 00:20:50.153 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:20:50.153 ============================================================================== 00:20:50.153 Range in us Cumulative IO count 00:20:50.153 8053.822 - 8106.461: 0.0240% ( 3) 00:20:50.153 8106.461 - 8159.100: 0.0881% ( 8) 00:20:50.153 8159.100 - 8211.740: 0.2564% ( 21) 00:20:50.153 8211.740 - 8264.379: 0.5449% ( 36) 00:20:50.153 8264.379 - 8317.018: 0.9295% ( 48) 00:20:50.153 8317.018 - 8369.658: 1.4343% ( 63) 00:20:50.153 8369.658 - 8422.297: 2.1074% ( 84) 00:20:50.153 8422.297 - 8474.937: 2.9728% ( 108) 00:20:50.153 8474.937 - 8527.576: 4.0385% ( 133) 00:20:50.153 8527.576 - 8580.215: 5.2244% ( 148) 00:20:50.153 8580.215 - 8632.855: 6.6266% ( 175) 00:20:50.153 8632.855 - 8685.494: 8.2292% ( 200) 00:20:50.153 8685.494 - 8738.133: 10.2163% ( 248) 00:20:50.153 8738.133 - 8790.773: 12.3558% ( 267) 00:20:50.153 8790.773 - 8843.412: 14.7356% ( 297) 00:20:50.153 8843.412 - 8896.051: 17.2035% ( 308) 00:20:50.153 8896.051 - 8948.691: 19.7276% ( 315) 00:20:50.153 8948.691 - 9001.330: 22.3237% ( 324) 00:20:50.153 9001.330 - 9053.969: 24.9038% ( 322) 00:20:50.153 9053.969 - 9106.609: 27.3237% ( 302) 00:20:50.153 9106.609 - 9159.248: 29.8878% ( 320) 00:20:50.153 9159.248 - 9211.888: 32.4760% ( 323) 00:20:50.153 9211.888 - 9264.527: 34.9599% ( 310) 00:20:50.153 9264.527 - 9317.166: 37.4199% ( 307) 00:20:50.153 9317.166 - 9369.806: 39.7436% ( 290) 00:20:50.153 9369.806 - 9422.445: 42.0753% ( 291) 00:20:50.153 9422.445 - 9475.084: 44.2147% ( 267) 00:20:50.153 9475.084 - 9527.724: 46.2019% ( 248) 00:20:50.153 9527.724 - 9580.363: 48.0288% ( 228) 00:20:50.153 9580.363 - 9633.002: 49.7516% ( 215) 00:20:50.153 9633.002 - 9685.642: 51.4423% ( 211) 00:20:50.153 9685.642 - 9738.281: 53.1010% ( 207) 00:20:50.153 9738.281 - 9790.920: 54.7035% ( 200) 00:20:50.153 9790.920 - 9843.560: 56.1939% ( 186) 00:20:50.153 9843.560 - 9896.199: 57.8285% ( 204) 00:20:50.153 9896.199 - 9948.839: 59.3750% ( 193) 00:20:50.153 9948.839 - 10001.478: 60.9455% ( 196) 00:20:50.153 10001.478 - 10054.117: 62.4679% ( 190) 00:20:50.153 10054.117 - 10106.757: 63.9583% ( 186) 00:20:50.153 10106.757 - 10159.396: 65.3686% ( 176) 00:20:50.153 10159.396 - 10212.035: 66.8910% ( 190) 00:20:50.153 10212.035 - 10264.675: 68.3013% ( 176) 00:20:50.153 10264.675 - 10317.314: 
69.7596% ( 182) 00:20:50.153 10317.314 - 10369.953: 71.1859% ( 178) 00:20:50.153 10369.953 - 10422.593: 72.6282% ( 180) 00:20:50.153 10422.593 - 10475.232: 74.1426% ( 189) 00:20:50.153 10475.232 - 10527.871: 75.6170% ( 184) 00:20:50.153 10527.871 - 10580.511: 76.9631% ( 168) 00:20:50.153 10580.511 - 10633.150: 78.3013% ( 167) 00:20:50.153 10633.150 - 10685.790: 79.5593% ( 157) 00:20:50.153 10685.790 - 10738.429: 80.7532% ( 149) 00:20:50.153 10738.429 - 10791.068: 81.9151% ( 145) 00:20:50.153 10791.068 - 10843.708: 83.0128% ( 137) 00:20:50.153 10843.708 - 10896.347: 84.0625% ( 131) 00:20:50.153 10896.347 - 10948.986: 85.1042% ( 130) 00:20:50.153 10948.986 - 11001.626: 86.0417% ( 117) 00:20:50.153 11001.626 - 11054.265: 86.9631% ( 115) 00:20:50.154 11054.265 - 11106.904: 87.8285% ( 108) 00:20:50.154 11106.904 - 11159.544: 88.5577% ( 91) 00:20:50.154 11159.544 - 11212.183: 89.2468% ( 86) 00:20:50.154 11212.183 - 11264.822: 89.8397% ( 74) 00:20:50.154 11264.822 - 11317.462: 90.4087% ( 71) 00:20:50.154 11317.462 - 11370.101: 90.9054% ( 62) 00:20:50.154 11370.101 - 11422.741: 91.3942% ( 61) 00:20:50.154 11422.741 - 11475.380: 91.8189% ( 53) 00:20:50.154 11475.380 - 11528.019: 92.1715% ( 44) 00:20:50.154 11528.019 - 11580.659: 92.4679% ( 37) 00:20:50.154 11580.659 - 11633.298: 92.7484% ( 35) 00:20:50.154 11633.298 - 11685.937: 92.9968% ( 31) 00:20:50.154 11685.937 - 11738.577: 93.2051% ( 26) 00:20:50.154 11738.577 - 11791.216: 93.3734% ( 21) 00:20:50.154 11791.216 - 11843.855: 93.5256% ( 19) 00:20:50.154 11843.855 - 11896.495: 93.6699% ( 18) 00:20:50.154 11896.495 - 11949.134: 93.7981% ( 16) 00:20:50.154 11949.134 - 12001.773: 93.9503% ( 19) 00:20:50.154 12001.773 - 12054.413: 94.0785% ( 16) 00:20:50.154 12054.413 - 12107.052: 94.1907% ( 14) 00:20:50.154 12107.052 - 12159.692: 94.2869% ( 12) 00:20:50.154 12159.692 - 12212.331: 94.3750% ( 11) 00:20:50.154 12212.331 - 12264.970: 94.4551% ( 10) 00:20:50.154 12264.970 - 12317.610: 94.5272% ( 9) 00:20:50.154 12317.610 - 12370.249: 94.6074% ( 10) 00:20:50.154 12370.249 - 12422.888: 94.6875% ( 10) 00:20:50.154 12422.888 - 12475.528: 94.7596% ( 9) 00:20:50.154 12475.528 - 12528.167: 94.8317% ( 9) 00:20:50.154 12528.167 - 12580.806: 94.9119% ( 10) 00:20:50.154 12580.806 - 12633.446: 95.0000% ( 11) 00:20:50.154 12633.446 - 12686.085: 95.0881% ( 11) 00:20:50.154 12686.085 - 12738.724: 95.1763% ( 11) 00:20:50.154 12738.724 - 12791.364: 95.2804% ( 13) 00:20:50.154 12791.364 - 12844.003: 95.3606% ( 10) 00:20:50.154 12844.003 - 12896.643: 95.4647% ( 13) 00:20:50.154 12896.643 - 12949.282: 95.5529% ( 11) 00:20:50.154 12949.282 - 13001.921: 95.6410% ( 11) 00:20:50.154 13001.921 - 13054.561: 95.7131% ( 9) 00:20:50.154 13054.561 - 13107.200: 95.7692% ( 7) 00:20:50.154 13107.200 - 13159.839: 95.8253% ( 7) 00:20:50.154 13159.839 - 13212.479: 95.8814% ( 7) 00:20:50.154 13212.479 - 13265.118: 95.9295% ( 6) 00:20:50.154 13265.118 - 13317.757: 95.9535% ( 3) 00:20:50.154 13317.757 - 13370.397: 95.9856% ( 4) 00:20:50.154 13370.397 - 13423.036: 96.0176% ( 4) 00:20:50.154 13423.036 - 13475.676: 96.0497% ( 4) 00:20:50.154 13475.676 - 13580.954: 96.1058% ( 7) 00:20:50.154 13580.954 - 13686.233: 96.1699% ( 8) 00:20:50.154 13686.233 - 13791.512: 96.2260% ( 7) 00:20:50.154 13791.512 - 13896.790: 96.2660% ( 5) 00:20:50.154 13896.790 - 14002.069: 96.3301% ( 8) 00:20:50.154 14002.069 - 14107.348: 96.4022% ( 9) 00:20:50.154 14107.348 - 14212.627: 96.4904% ( 11) 00:20:50.154 14212.627 - 14317.905: 96.5625% ( 9) 00:20:50.154 14317.905 - 14423.184: 96.6266% ( 8) 00:20:50.154 14423.184 - 
14528.463: 96.6667% ( 5) 00:20:50.154 14528.463 - 14633.741: 96.7308% ( 8) 00:20:50.154 14633.741 - 14739.020: 96.7949% ( 8) 00:20:50.154 14739.020 - 14844.299: 96.8670% ( 9) 00:20:50.154 14844.299 - 14949.578: 96.9471% ( 10) 00:20:50.154 14949.578 - 15054.856: 97.0192% ( 9) 00:20:50.154 15054.856 - 15160.135: 97.0994% ( 10) 00:20:50.154 15160.135 - 15265.414: 97.1314% ( 4) 00:20:50.154 15265.414 - 15370.692: 97.1635% ( 4) 00:20:50.154 15370.692 - 15475.971: 97.1955% ( 4) 00:20:50.154 15475.971 - 15581.250: 97.2276% ( 4) 00:20:50.154 15581.250 - 15686.529: 97.2596% ( 4) 00:20:50.154 15686.529 - 15791.807: 97.2837% ( 3) 00:20:50.154 15791.807 - 15897.086: 97.3317% ( 6) 00:20:50.154 15897.086 - 16002.365: 97.4038% ( 9) 00:20:50.154 16002.365 - 16107.643: 97.4679% ( 8) 00:20:50.154 16107.643 - 16212.922: 97.5321% ( 8) 00:20:50.154 16212.922 - 16318.201: 97.5881% ( 7) 00:20:50.154 16318.201 - 16423.480: 97.6202% ( 4) 00:20:50.154 16423.480 - 16528.758: 97.6522% ( 4) 00:20:50.154 16528.758 - 16634.037: 97.6843% ( 4) 00:20:50.154 16634.037 - 16739.316: 97.7163% ( 4) 00:20:50.154 16739.316 - 16844.594: 97.7484% ( 4) 00:20:50.154 16844.594 - 16949.873: 97.7724% ( 3) 00:20:50.154 16949.873 - 17055.152: 97.8045% ( 4) 00:20:50.154 17055.152 - 17160.431: 97.8365% ( 4) 00:20:50.154 17160.431 - 17265.709: 97.8686% ( 4) 00:20:50.154 17265.709 - 17370.988: 97.9167% ( 6) 00:20:50.154 17370.988 - 17476.267: 97.9728% ( 7) 00:20:50.154 17476.267 - 17581.545: 98.0128% ( 5) 00:20:50.154 17581.545 - 17686.824: 98.0449% ( 4) 00:20:50.154 17686.824 - 17792.103: 98.0689% ( 3) 00:20:50.154 17792.103 - 17897.382: 98.0929% ( 3) 00:20:50.154 17897.382 - 18002.660: 98.1090% ( 2) 00:20:50.154 18002.660 - 18107.939: 98.1571% ( 6) 00:20:50.154 18107.939 - 18213.218: 98.2051% ( 6) 00:20:50.154 18213.218 - 18318.496: 98.2532% ( 6) 00:20:50.154 18318.496 - 18423.775: 98.3093% ( 7) 00:20:50.154 18423.775 - 18529.054: 98.3654% ( 7) 00:20:50.154 18529.054 - 18634.333: 98.4135% ( 6) 00:20:50.154 18634.333 - 18739.611: 98.4615% ( 6) 00:20:50.154 18739.611 - 18844.890: 98.5016% ( 5) 00:20:50.154 18844.890 - 18950.169: 98.5577% ( 7) 00:20:50.154 18950.169 - 19055.447: 98.6058% ( 6) 00:20:50.154 19055.447 - 19160.726: 98.6619% ( 7) 00:20:50.154 19160.726 - 19266.005: 98.7099% ( 6) 00:20:50.154 19266.005 - 19371.284: 98.7660% ( 7) 00:20:50.154 19371.284 - 19476.562: 98.8061% ( 5) 00:20:50.154 19476.562 - 19581.841: 98.8301% ( 3) 00:20:50.154 19581.841 - 19687.120: 98.8542% ( 3) 00:20:50.154 19687.120 - 19792.398: 98.8862% ( 4) 00:20:50.154 19792.398 - 19897.677: 98.9103% ( 3) 00:20:50.154 19897.677 - 20002.956: 98.9343% ( 3) 00:20:50.154 20002.956 - 20108.235: 98.9583% ( 3) 00:20:50.154 20108.235 - 20213.513: 98.9744% ( 2) 00:20:50.154 34531.418 - 34741.976: 98.9824% ( 1) 00:20:50.154 34741.976 - 34952.533: 99.0224% ( 5) 00:20:50.154 34952.533 - 35163.091: 99.0625% ( 5) 00:20:50.154 35163.091 - 35373.648: 99.1106% ( 6) 00:20:50.154 35373.648 - 35584.206: 99.1587% ( 6) 00:20:50.154 35584.206 - 35794.763: 99.2147% ( 7) 00:20:50.154 35794.763 - 36005.320: 99.2628% ( 6) 00:20:50.154 36005.320 - 36215.878: 99.3109% ( 6) 00:20:50.154 36215.878 - 36426.435: 99.3590% ( 6) 00:20:50.154 36426.435 - 36636.993: 99.3990% ( 5) 00:20:50.154 36636.993 - 36847.550: 99.4471% ( 6) 00:20:50.154 36847.550 - 37058.108: 99.4872% ( 5) 00:20:50.154 41900.929 - 42111.486: 99.4952% ( 1) 00:20:50.154 42111.486 - 42322.043: 99.5433% ( 6) 00:20:50.154 42322.043 - 42532.601: 99.5913% ( 6) 00:20:50.154 42532.601 - 42743.158: 99.6314% ( 5) 00:20:50.154 42743.158 - 
42953.716: 99.6795% ( 6) 00:20:50.154 42953.716 - 43164.273: 99.7276% ( 6) 00:20:50.154 43164.273 - 43374.831: 99.7837% ( 7) 00:20:50.154 43374.831 - 43585.388: 99.8237% ( 5) 00:20:50.154 43585.388 - 43795.945: 99.8798% ( 7) 00:20:50.154 43795.945 - 44006.503: 99.9279% ( 6) 00:20:50.154 44006.503 - 44217.060: 99.9760% ( 6) 00:20:50.154 44217.060 - 44427.618: 100.0000% ( 3) 00:20:50.154 00:20:50.154 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:20:50.154 ============================================================================== 00:20:50.154 Range in us Cumulative IO count 00:20:50.154 8053.822 - 8106.461: 0.0080% ( 1) 00:20:50.154 8106.461 - 8159.100: 0.1603% ( 19) 00:20:50.154 8159.100 - 8211.740: 0.3446% ( 23) 00:20:50.154 8211.740 - 8264.379: 0.6010% ( 32) 00:20:50.154 8264.379 - 8317.018: 0.9856% ( 48) 00:20:50.154 8317.018 - 8369.658: 1.4663% ( 60) 00:20:50.154 8369.658 - 8422.297: 2.0833% ( 77) 00:20:50.154 8422.297 - 8474.937: 2.8125% ( 91) 00:20:50.154 8474.937 - 8527.576: 3.7099% ( 112) 00:20:50.154 8527.576 - 8580.215: 4.8478% ( 142) 00:20:50.154 8580.215 - 8632.855: 6.2179% ( 171) 00:20:50.154 8632.855 - 8685.494: 7.9006% ( 210) 00:20:50.154 8685.494 - 8738.133: 9.8798% ( 247) 00:20:50.154 8738.133 - 8790.773: 12.1154% ( 279) 00:20:50.154 8790.773 - 8843.412: 14.4631% ( 293) 00:20:50.154 8843.412 - 8896.051: 16.8830% ( 302) 00:20:50.154 8896.051 - 8948.691: 19.3910% ( 313) 00:20:50.154 8948.691 - 9001.330: 21.9712% ( 322) 00:20:50.154 9001.330 - 9053.969: 24.5513% ( 322) 00:20:50.154 9053.969 - 9106.609: 27.1474% ( 324) 00:20:50.154 9106.609 - 9159.248: 29.7756% ( 328) 00:20:50.154 9159.248 - 9211.888: 32.4199% ( 330) 00:20:50.154 9211.888 - 9264.527: 35.0321% ( 326) 00:20:50.154 9264.527 - 9317.166: 37.5801% ( 318) 00:20:50.154 9317.166 - 9369.806: 40.0160% ( 304) 00:20:50.154 9369.806 - 9422.445: 42.3478% ( 291) 00:20:50.154 9422.445 - 9475.084: 44.6635% ( 289) 00:20:50.154 9475.084 - 9527.724: 46.6747% ( 251) 00:20:50.154 9527.724 - 9580.363: 48.4936% ( 227) 00:20:50.154 9580.363 - 9633.002: 50.3125% ( 227) 00:20:50.154 9633.002 - 9685.642: 51.9391% ( 203) 00:20:50.154 9685.642 - 9738.281: 53.4936% ( 194) 00:20:50.154 9738.281 - 9790.920: 55.0561% ( 195) 00:20:50.154 9790.920 - 9843.560: 56.7067% ( 206) 00:20:50.155 9843.560 - 9896.199: 58.3253% ( 202) 00:20:50.155 9896.199 - 9948.839: 59.8478% ( 190) 00:20:50.155 9948.839 - 10001.478: 61.2099% ( 170) 00:20:50.155 10001.478 - 10054.117: 62.5321% ( 165) 00:20:50.155 10054.117 - 10106.757: 63.7660% ( 154) 00:20:50.155 10106.757 - 10159.396: 65.1362% ( 171) 00:20:50.155 10159.396 - 10212.035: 66.5385% ( 175) 00:20:50.155 10212.035 - 10264.675: 67.9888% ( 181) 00:20:50.155 10264.675 - 10317.314: 69.4311% ( 180) 00:20:50.155 10317.314 - 10369.953: 70.9455% ( 189) 00:20:50.155 10369.953 - 10422.593: 72.3397% ( 174) 00:20:50.155 10422.593 - 10475.232: 73.7981% ( 182) 00:20:50.155 10475.232 - 10527.871: 75.2404% ( 180) 00:20:50.155 10527.871 - 10580.511: 76.6587% ( 177) 00:20:50.155 10580.511 - 10633.150: 77.9407% ( 160) 00:20:50.155 10633.150 - 10685.790: 79.2628% ( 165) 00:20:50.155 10685.790 - 10738.429: 80.6010% ( 167) 00:20:50.155 10738.429 - 10791.068: 81.8349% ( 154) 00:20:50.155 10791.068 - 10843.708: 83.0609% ( 153) 00:20:50.155 10843.708 - 10896.347: 84.1667% ( 138) 00:20:50.155 10896.347 - 10948.986: 85.2804% ( 139) 00:20:50.155 10948.986 - 11001.626: 86.3381% ( 132) 00:20:50.155 11001.626 - 11054.265: 87.3478% ( 126) 00:20:50.155 11054.265 - 11106.904: 88.2612% ( 114) 00:20:50.155 11106.904 - 
11159.544: 89.0785% ( 102) 00:20:50.155 11159.544 - 11212.183: 89.8157% ( 92) 00:20:50.155 11212.183 - 11264.822: 90.4888% ( 84) 00:20:50.155 11264.822 - 11317.462: 91.1138% ( 78) 00:20:50.155 11317.462 - 11370.101: 91.6426% ( 66) 00:20:50.155 11370.101 - 11422.741: 92.1234% ( 60) 00:20:50.155 11422.741 - 11475.380: 92.5080% ( 48) 00:20:50.155 11475.380 - 11528.019: 92.8846% ( 47) 00:20:50.155 11528.019 - 11580.659: 93.2131% ( 41) 00:20:50.155 11580.659 - 11633.298: 93.4215% ( 26) 00:20:50.155 11633.298 - 11685.937: 93.5577% ( 17) 00:20:50.155 11685.937 - 11738.577: 93.7019% ( 18) 00:20:50.155 11738.577 - 11791.216: 93.8221% ( 15) 00:20:50.155 11791.216 - 11843.855: 93.9022% ( 10) 00:20:50.155 11843.855 - 11896.495: 93.9984% ( 12) 00:20:50.155 11896.495 - 11949.134: 94.0785% ( 10) 00:20:50.155 11949.134 - 12001.773: 94.1346% ( 7) 00:20:50.155 12001.773 - 12054.413: 94.1827% ( 6) 00:20:50.155 12054.413 - 12107.052: 94.1987% ( 2) 00:20:50.155 12107.052 - 12159.692: 94.2388% ( 5) 00:20:50.155 12159.692 - 12212.331: 94.2869% ( 6) 00:20:50.155 12212.331 - 12264.970: 94.3429% ( 7) 00:20:50.155 12264.970 - 12317.610: 94.4151% ( 9) 00:20:50.155 12317.610 - 12370.249: 94.4952% ( 10) 00:20:50.155 12370.249 - 12422.888: 94.5753% ( 10) 00:20:50.155 12422.888 - 12475.528: 94.6635% ( 11) 00:20:50.155 12475.528 - 12528.167: 94.7356% ( 9) 00:20:50.155 12528.167 - 12580.806: 94.8157% ( 10) 00:20:50.155 12580.806 - 12633.446: 94.8958% ( 10) 00:20:50.155 12633.446 - 12686.085: 94.9679% ( 9) 00:20:50.155 12686.085 - 12738.724: 95.0481% ( 10) 00:20:50.155 12738.724 - 12791.364: 95.1282% ( 10) 00:20:50.155 12791.364 - 12844.003: 95.2083% ( 10) 00:20:50.155 12844.003 - 12896.643: 95.2885% ( 10) 00:20:50.155 12896.643 - 12949.282: 95.3606% ( 9) 00:20:50.155 12949.282 - 13001.921: 95.4327% ( 9) 00:20:50.155 13001.921 - 13054.561: 95.5048% ( 9) 00:20:50.155 13054.561 - 13107.200: 95.5849% ( 10) 00:20:50.155 13107.200 - 13159.839: 95.6731% ( 11) 00:20:50.155 13159.839 - 13212.479: 95.7532% ( 10) 00:20:50.155 13212.479 - 13265.118: 95.8093% ( 7) 00:20:50.155 13265.118 - 13317.757: 95.8734% ( 8) 00:20:50.155 13317.757 - 13370.397: 95.9135% ( 5) 00:20:50.155 13370.397 - 13423.036: 95.9535% ( 5) 00:20:50.155 13423.036 - 13475.676: 96.0016% ( 6) 00:20:50.155 13475.676 - 13580.954: 96.0978% ( 12) 00:20:50.155 13580.954 - 13686.233: 96.1939% ( 12) 00:20:50.155 13686.233 - 13791.512: 96.2821% ( 11) 00:20:50.155 13791.512 - 13896.790: 96.3862% ( 13) 00:20:50.155 13896.790 - 14002.069: 96.4904% ( 13) 00:20:50.155 14002.069 - 14107.348: 96.5545% ( 8) 00:20:50.155 14107.348 - 14212.627: 96.6346% ( 10) 00:20:50.155 14212.627 - 14317.905: 96.7388% ( 13) 00:20:50.155 14317.905 - 14423.184: 96.8189% ( 10) 00:20:50.155 14423.184 - 14528.463: 96.8990% ( 10) 00:20:50.155 14528.463 - 14633.741: 96.9792% ( 10) 00:20:50.155 14633.741 - 14739.020: 97.0593% ( 10) 00:20:50.155 14739.020 - 14844.299: 97.1394% ( 10) 00:20:50.155 14844.299 - 14949.578: 97.2196% ( 10) 00:20:50.155 14949.578 - 15054.856: 97.2917% ( 9) 00:20:50.155 15054.856 - 15160.135: 97.3397% ( 6) 00:20:50.155 15160.135 - 15265.414: 97.3878% ( 6) 00:20:50.155 15265.414 - 15370.692: 97.4359% ( 6) 00:20:50.155 16423.480 - 16528.758: 97.4439% ( 1) 00:20:50.155 16528.758 - 16634.037: 97.4599% ( 2) 00:20:50.155 16634.037 - 16739.316: 97.4840% ( 3) 00:20:50.155 16739.316 - 16844.594: 97.5080% ( 3) 00:20:50.155 16844.594 - 16949.873: 97.5401% ( 4) 00:20:50.155 16949.873 - 17055.152: 97.5641% ( 3) 00:20:50.155 17055.152 - 17160.431: 97.5881% ( 3) 00:20:50.155 17160.431 - 17265.709: 
97.6122% ( 3) 00:20:50.155 17265.709 - 17370.988: 97.6683% ( 7) 00:20:50.155 17370.988 - 17476.267: 97.7244% ( 7) 00:20:50.155 17476.267 - 17581.545: 97.7724% ( 6) 00:20:50.155 17581.545 - 17686.824: 97.8285% ( 7) 00:20:50.155 17686.824 - 17792.103: 97.8926% ( 8) 00:20:50.155 17792.103 - 17897.382: 97.9407% ( 6) 00:20:50.155 17897.382 - 18002.660: 97.9968% ( 7) 00:20:50.155 18002.660 - 18107.939: 98.0529% ( 7) 00:20:50.155 18107.939 - 18213.218: 98.1170% ( 8) 00:20:50.155 18213.218 - 18318.496: 98.1651% ( 6) 00:20:50.155 18318.496 - 18423.775: 98.2212% ( 7) 00:20:50.155 18423.775 - 18529.054: 98.2772% ( 7) 00:20:50.155 18529.054 - 18634.333: 98.3574% ( 10) 00:20:50.155 18634.333 - 18739.611: 98.4135% ( 7) 00:20:50.155 18739.611 - 18844.890: 98.4696% ( 7) 00:20:50.155 18844.890 - 18950.169: 98.5256% ( 7) 00:20:50.155 18950.169 - 19055.447: 98.5737% ( 6) 00:20:50.155 19055.447 - 19160.726: 98.6058% ( 4) 00:20:50.155 19160.726 - 19266.005: 98.6378% ( 4) 00:20:50.155 19266.005 - 19371.284: 98.6619% ( 3) 00:20:50.155 19371.284 - 19476.562: 98.6859% ( 3) 00:20:50.155 19476.562 - 19581.841: 98.7099% ( 3) 00:20:50.155 19581.841 - 19687.120: 98.7340% ( 3) 00:20:50.155 19687.120 - 19792.398: 98.7660% ( 4) 00:20:50.155 19792.398 - 19897.677: 98.7821% ( 2) 00:20:50.155 19897.677 - 20002.956: 98.8061% ( 3) 00:20:50.155 20002.956 - 20108.235: 98.8381% ( 4) 00:20:50.155 20108.235 - 20213.513: 98.8622% ( 3) 00:20:50.155 20213.513 - 20318.792: 98.8782% ( 2) 00:20:50.155 20318.792 - 20424.071: 98.9103% ( 4) 00:20:50.155 20424.071 - 20529.349: 98.9343% ( 3) 00:20:50.155 20529.349 - 20634.628: 98.9583% ( 3) 00:20:50.155 20634.628 - 20739.907: 98.9744% ( 2) 00:20:50.155 32425.844 - 32636.402: 99.0064% ( 4) 00:20:50.155 32636.402 - 32846.959: 99.0465% ( 5) 00:20:50.155 32846.959 - 33057.516: 99.1026% ( 7) 00:20:50.155 33057.516 - 33268.074: 99.1426% ( 5) 00:20:50.155 33268.074 - 33478.631: 99.1907% ( 6) 00:20:50.155 33478.631 - 33689.189: 99.2308% ( 5) 00:20:50.155 33689.189 - 33899.746: 99.2788% ( 6) 00:20:50.155 33899.746 - 34110.304: 99.3269% ( 6) 00:20:50.155 34110.304 - 34320.861: 99.3670% ( 5) 00:20:50.155 34320.861 - 34531.418: 99.4151% ( 6) 00:20:50.155 34531.418 - 34741.976: 99.4631% ( 6) 00:20:50.155 34741.976 - 34952.533: 99.4872% ( 3) 00:20:50.155 40005.912 - 40216.469: 99.5272% ( 5) 00:20:50.155 40216.469 - 40427.027: 99.5673% ( 5) 00:20:50.155 40427.027 - 40637.584: 99.6154% ( 6) 00:20:50.155 40637.584 - 40848.141: 99.6474% ( 4) 00:20:50.155 40848.141 - 41058.699: 99.6955% ( 6) 00:20:50.155 41058.699 - 41269.256: 99.7436% ( 6) 00:20:50.155 41269.256 - 41479.814: 99.7837% ( 5) 00:20:50.155 41479.814 - 41690.371: 99.8317% ( 6) 00:20:50.155 41690.371 - 41900.929: 99.8878% ( 7) 00:20:50.155 41900.929 - 42111.486: 99.9359% ( 6) 00:20:50.155 42111.486 - 42322.043: 99.9840% ( 6) 00:20:50.155 42322.043 - 42532.601: 100.0000% ( 2) 00:20:50.155 00:20:50.155 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:20:50.155 ============================================================================== 00:20:50.155 Range in us Cumulative IO count 00:20:50.155 8053.822 - 8106.461: 0.0080% ( 1) 00:20:50.155 8106.461 - 8159.100: 0.1282% ( 15) 00:20:50.155 8159.100 - 8211.740: 0.3205% ( 24) 00:20:50.155 8211.740 - 8264.379: 0.6170% ( 37) 00:20:50.155 8264.379 - 8317.018: 1.0256% ( 51) 00:20:50.155 8317.018 - 8369.658: 1.4663% ( 55) 00:20:50.155 8369.658 - 8422.297: 2.0593% ( 74) 00:20:50.155 8422.297 - 8474.937: 2.7484% ( 86) 00:20:50.155 8474.937 - 8527.576: 3.6779% ( 116) 00:20:50.155 8527.576 - 
8580.215: 4.7035% ( 128) 00:20:50.155 8580.215 - 8632.855: 5.9135% ( 151) 00:20:50.155 8632.855 - 8685.494: 7.4439% ( 191) 00:20:50.155 8685.494 - 8738.133: 9.4551% ( 251) 00:20:50.155 8738.133 - 8790.773: 11.6426% ( 273) 00:20:50.155 8790.773 - 8843.412: 14.0064% ( 295) 00:20:50.155 8843.412 - 8896.051: 16.4904% ( 310) 00:20:50.155 8896.051 - 8948.691: 19.0865% ( 324) 00:20:50.155 8948.691 - 9001.330: 21.7308% ( 330) 00:20:50.155 9001.330 - 9053.969: 24.4391% ( 338) 00:20:50.155 9053.969 - 9106.609: 27.0192% ( 322) 00:20:50.155 9106.609 - 9159.248: 29.5433% ( 315) 00:20:50.155 9159.248 - 9211.888: 32.1795% ( 329) 00:20:50.155 9211.888 - 9264.527: 34.7516% ( 321) 00:20:50.155 9264.527 - 9317.166: 37.3558% ( 325) 00:20:50.155 9317.166 - 9369.806: 39.8558% ( 312) 00:20:50.155 9369.806 - 9422.445: 42.3077% ( 306) 00:20:50.155 9422.445 - 9475.084: 44.5833% ( 284) 00:20:50.156 9475.084 - 9527.724: 46.7468% ( 270) 00:20:50.156 9527.724 - 9580.363: 48.6779% ( 241) 00:20:50.156 9580.363 - 9633.002: 50.4087% ( 216) 00:20:50.156 9633.002 - 9685.642: 51.9551% ( 193) 00:20:50.156 9685.642 - 9738.281: 53.5817% ( 203) 00:20:50.156 9738.281 - 9790.920: 55.0401% ( 182) 00:20:50.156 9790.920 - 9843.560: 56.5144% ( 184) 00:20:50.156 9843.560 - 9896.199: 58.0048% ( 186) 00:20:50.156 9896.199 - 9948.839: 59.5032% ( 187) 00:20:50.156 9948.839 - 10001.478: 60.9696% ( 183) 00:20:50.156 10001.478 - 10054.117: 62.4199% ( 181) 00:20:50.156 10054.117 - 10106.757: 63.8622% ( 180) 00:20:50.156 10106.757 - 10159.396: 65.3446% ( 185) 00:20:50.156 10159.396 - 10212.035: 66.8189% ( 184) 00:20:50.156 10212.035 - 10264.675: 68.3494% ( 191) 00:20:50.156 10264.675 - 10317.314: 69.8718% ( 190) 00:20:50.156 10317.314 - 10369.953: 71.3301% ( 182) 00:20:50.156 10369.953 - 10422.593: 72.8446% ( 189) 00:20:50.156 10422.593 - 10475.232: 74.2468% ( 175) 00:20:50.156 10475.232 - 10527.871: 75.5929% ( 168) 00:20:50.156 10527.871 - 10580.511: 76.8670% ( 159) 00:20:50.156 10580.511 - 10633.150: 78.0769% ( 151) 00:20:50.156 10633.150 - 10685.790: 79.3670% ( 161) 00:20:50.156 10685.790 - 10738.429: 80.6891% ( 165) 00:20:50.156 10738.429 - 10791.068: 81.8990% ( 151) 00:20:50.156 10791.068 - 10843.708: 83.0609% ( 145) 00:20:50.156 10843.708 - 10896.347: 84.1426% ( 135) 00:20:50.156 10896.347 - 10948.986: 85.0962% ( 119) 00:20:50.156 10948.986 - 11001.626: 86.0657% ( 121) 00:20:50.156 11001.626 - 11054.265: 86.9712% ( 113) 00:20:50.156 11054.265 - 11106.904: 87.7564% ( 98) 00:20:50.156 11106.904 - 11159.544: 88.4936% ( 92) 00:20:50.156 11159.544 - 11212.183: 89.1667% ( 84) 00:20:50.156 11212.183 - 11264.822: 89.8077% ( 80) 00:20:50.156 11264.822 - 11317.462: 90.3766% ( 71) 00:20:50.156 11317.462 - 11370.101: 90.9135% ( 67) 00:20:50.156 11370.101 - 11422.741: 91.3942% ( 60) 00:20:50.156 11422.741 - 11475.380: 91.8590% ( 58) 00:20:50.156 11475.380 - 11528.019: 92.2837% ( 53) 00:20:50.156 11528.019 - 11580.659: 92.6603% ( 47) 00:20:50.156 11580.659 - 11633.298: 92.9247% ( 33) 00:20:50.156 11633.298 - 11685.937: 93.1651% ( 30) 00:20:50.156 11685.937 - 11738.577: 93.3734% ( 26) 00:20:50.156 11738.577 - 11791.216: 93.5577% ( 23) 00:20:50.156 11791.216 - 11843.855: 93.7099% ( 19) 00:20:50.156 11843.855 - 11896.495: 93.8542% ( 18) 00:20:50.156 11896.495 - 11949.134: 94.0224% ( 21) 00:20:50.156 11949.134 - 12001.773: 94.1506% ( 16) 00:20:50.156 12001.773 - 12054.413: 94.2468% ( 12) 00:20:50.156 12054.413 - 12107.052: 94.3510% ( 13) 00:20:50.156 12107.052 - 12159.692: 94.4471% ( 12) 00:20:50.156 12159.692 - 12212.331: 94.5192% ( 9) 00:20:50.156 
12212.331 - 12264.970: 94.5673% ( 6) 00:20:50.156 12264.970 - 12317.610: 94.6394% ( 9) 00:20:50.156 12317.610 - 12370.249: 94.6955% ( 7) 00:20:50.156 12370.249 - 12422.888: 94.7516% ( 7) 00:20:50.156 12422.888 - 12475.528: 94.7837% ( 4) 00:20:50.156 12475.528 - 12528.167: 94.7997% ( 2) 00:20:50.156 12528.167 - 12580.806: 94.8237% ( 3) 00:20:50.156 12580.806 - 12633.446: 94.8478% ( 3) 00:20:50.156 12633.446 - 12686.085: 94.8718% ( 3) 00:20:50.156 12738.724 - 12791.364: 94.9038% ( 4) 00:20:50.156 12791.364 - 12844.003: 94.9439% ( 5) 00:20:50.156 12844.003 - 12896.643: 94.9920% ( 6) 00:20:50.156 12896.643 - 12949.282: 95.0321% ( 5) 00:20:50.156 12949.282 - 13001.921: 95.0801% ( 6) 00:20:50.156 13001.921 - 13054.561: 95.1362% ( 7) 00:20:50.156 13054.561 - 13107.200: 95.2083% ( 9) 00:20:50.156 13107.200 - 13159.839: 95.2724% ( 8) 00:20:50.156 13159.839 - 13212.479: 95.3446% ( 9) 00:20:50.156 13212.479 - 13265.118: 95.4087% ( 8) 00:20:50.156 13265.118 - 13317.757: 95.4888% ( 10) 00:20:50.156 13317.757 - 13370.397: 95.5529% ( 8) 00:20:50.156 13370.397 - 13423.036: 95.6330% ( 10) 00:20:50.156 13423.036 - 13475.676: 95.7051% ( 9) 00:20:50.156 13475.676 - 13580.954: 95.8574% ( 19) 00:20:50.156 13580.954 - 13686.233: 96.0417% ( 23) 00:20:50.156 13686.233 - 13791.512: 96.2500% ( 26) 00:20:50.156 13791.512 - 13896.790: 96.4183% ( 21) 00:20:50.156 13896.790 - 14002.069: 96.5625% ( 18) 00:20:50.156 14002.069 - 14107.348: 96.7067% ( 18) 00:20:50.156 14107.348 - 14212.627: 96.8429% ( 17) 00:20:50.156 14212.627 - 14317.905: 96.9712% ( 16) 00:20:50.156 14317.905 - 14423.184: 97.1074% ( 17) 00:20:50.156 14423.184 - 14528.463: 97.2276% ( 15) 00:20:50.156 14528.463 - 14633.741: 97.3397% ( 14) 00:20:50.156 14633.741 - 14739.020: 97.4279% ( 11) 00:20:50.156 14739.020 - 14844.299: 97.4359% ( 1) 00:20:50.156 16528.758 - 16634.037: 97.4439% ( 1) 00:20:50.156 16634.037 - 16739.316: 97.4760% ( 4) 00:20:50.156 16739.316 - 16844.594: 97.5080% ( 4) 00:20:50.156 16844.594 - 16949.873: 97.5641% ( 7) 00:20:50.156 16949.873 - 17055.152: 97.6202% ( 7) 00:20:50.156 17055.152 - 17160.431: 97.6763% ( 7) 00:20:50.156 17160.431 - 17265.709: 97.7324% ( 7) 00:20:50.156 17265.709 - 17370.988: 97.7885% ( 7) 00:20:50.156 17370.988 - 17476.267: 97.8446% ( 7) 00:20:50.156 17476.267 - 17581.545: 97.9006% ( 7) 00:20:50.156 17581.545 - 17686.824: 97.9487% ( 6) 00:20:50.156 17686.824 - 17792.103: 97.9968% ( 6) 00:20:50.156 17792.103 - 17897.382: 98.0449% ( 6) 00:20:50.156 17897.382 - 18002.660: 98.1010% ( 7) 00:20:50.156 18002.660 - 18107.939: 98.1571% ( 7) 00:20:50.156 18107.939 - 18213.218: 98.1971% ( 5) 00:20:50.156 18213.218 - 18318.496: 98.2532% ( 7) 00:20:50.156 18318.496 - 18423.775: 98.3013% ( 6) 00:20:50.156 18423.775 - 18529.054: 98.3654% ( 8) 00:20:50.156 18529.054 - 18634.333: 98.4135% ( 6) 00:20:50.156 18634.333 - 18739.611: 98.4535% ( 5) 00:20:50.156 18739.611 - 18844.890: 98.4936% ( 5) 00:20:50.156 18844.890 - 18950.169: 98.5176% ( 3) 00:20:50.156 18950.169 - 19055.447: 98.5417% ( 3) 00:20:50.156 19055.447 - 19160.726: 98.5657% ( 3) 00:20:50.156 19160.726 - 19266.005: 98.5978% ( 4) 00:20:50.156 19266.005 - 19371.284: 98.6218% ( 3) 00:20:50.156 19371.284 - 19476.562: 98.6458% ( 3) 00:20:50.156 19476.562 - 19581.841: 98.6699% ( 3) 00:20:50.156 19581.841 - 19687.120: 98.6939% ( 3) 00:20:50.156 19687.120 - 19792.398: 98.7179% ( 3) 00:20:50.156 19792.398 - 19897.677: 98.7500% ( 4) 00:20:50.156 19897.677 - 20002.956: 98.7740% ( 3) 00:20:50.156 20002.956 - 20108.235: 98.7981% ( 3) 00:20:50.156 20108.235 - 20213.513: 98.8221% ( 3) 
00:20:50.156 20213.513 - 20318.792: 98.8462% ( 3)
00:20:50.156 20318.792 - 20424.071: 98.8782% ( 4)
00:20:50.156 20424.071 - 20529.349: 98.8942% ( 2)
00:20:50.156 20529.349 - 20634.628: 98.9263% ( 4)
00:20:50.156 20634.628 - 20739.907: 98.9503% ( 3)
00:20:50.156 20739.907 - 20845.186: 98.9744% ( 3)
00:20:50.156 29899.155 - 30109.712: 98.9984% ( 3)
00:20:50.156 30109.712 - 30320.270: 99.0385% ( 5)
00:20:50.156 30320.270 - 30530.827: 99.0865% ( 6)
00:20:50.156 30530.827 - 30741.385: 99.1346% ( 6)
00:20:50.156 30741.385 - 30951.942: 99.1907% ( 7)
00:20:50.156 30951.942 - 31162.500: 99.2308% ( 5)
00:20:50.156 31162.500 - 31373.057: 99.2788% ( 6)
00:20:50.156 31373.057 - 31583.614: 99.3349% ( 7)
00:20:50.156 31583.614 - 31794.172: 99.3830% ( 6)
00:20:50.156 31794.172 - 32004.729: 99.4231% ( 5)
00:20:50.156 32004.729 - 32215.287: 99.4712% ( 6)
00:20:50.156 32215.287 - 32425.844: 99.4872% ( 2)
00:20:50.156 37268.665 - 37479.222: 99.5032% ( 2)
00:20:50.156 37479.222 - 37689.780: 99.5513% ( 6)
00:20:50.156 37689.780 - 37900.337: 99.5994% ( 6)
00:20:50.156 37900.337 - 38110.895: 99.6474% ( 6)
00:20:50.156 38110.895 - 38321.452: 99.6955% ( 6)
00:20:50.156 38321.452 - 38532.010: 99.7436% ( 6)
00:20:50.156 38532.010 - 38742.567: 99.7917% ( 6)
00:20:50.156 38742.567 - 38953.124: 99.8397% ( 6)
00:20:50.156 38953.124 - 39163.682: 99.8878% ( 6)
00:20:50.156 39163.682 - 39374.239: 99.9359% ( 6)
00:20:50.156 39374.239 - 39584.797: 99.9840% ( 6)
00:20:50.156 39584.797 - 39795.354: 100.0000% ( 2)
00:20:50.156 
00:20:50.156 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:20:50.157 ==============================================================================
00:20:50.157 Range in us Cumulative IO count
[... per-bucket cumulative latency entries omitted; they run from 8106.461 - 8159.100: 0.0481% ( 6) to 37058.108 - 37268.665: 100.0000% ( 4) ...]
00:20:50.158 
00:20:50.158 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:20:50.158 ==============================================================================
00:20:50.158 Range in us Cumulative IO count
[... per-bucket cumulative latency entries omitted; they run from 8053.822 - 8106.461: 0.0080% ( 1) to 28635.810 - 28846.368: 100.0000% ( 3) ...]
00:20:50.159 
00:20:50.159 16:35:26 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:20:51.539 Initializing NVMe Controllers
00:20:51.539 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:20:51.539 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:20:51.539 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:20:51.539 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:20:51.539 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:20:51.539 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:20:51.539 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:20:51.539 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:20:51.539 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:20:51.539 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:20:51.539 Initialization complete. Launching workers.
00:20:51.539 ========================================================
00:20:51.539                                                                   Latency(us)
00:20:51.539 Device Information                      :       IOPS      MiB/s    Average        min        max
00:20:51.539 PCIE (0000:00:10.0) NSID 1 from core 0:    9140.64     107.12   14036.97    8981.59   46748.45
00:20:51.539 PCIE (0000:00:11.0) NSID 1 from core 0:    9140.64     107.12   14009.46    9015.29   44259.66
00:20:51.539 PCIE (0000:00:13.0) NSID 1 from core 0:    9140.64     107.12   13981.06    9151.78   42649.07
00:20:51.539 PCIE (0000:00:12.0) NSID 1 from core 0:    9140.64     107.12   13952.43    9273.37   40575.13
00:20:51.539 PCIE (0000:00:12.0) NSID 2 from core 0:    9140.64     107.12   13924.56    9131.73   38227.58
00:20:51.539 PCIE (0000:00:12.0) NSID 3 from core 0:    9140.64     107.12   13897.15    9227.22   36132.06
00:20:51.539 ========================================================
00:20:51.539 Total                                   :   54843.83     642.70   13966.94    8981.59   46748.45
00:20:51.539 
00:20:51.539 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:20:51.539 =================================================================================
00:20:51.539  1.00000% :  9422.445us
00:20:51.539 10.00000% : 10212.035us
00:20:51.539 25.00000% : 11422.741us
00:20:51.539 50.00000% : 13107.200us
00:20:51.539 75.00000% : 16107.643us
00:20:51.539 90.00000% : 18107.939us
00:20:51.539 95.00000% : 19687.120us
00:20:51.539 98.00000% : 20950.464us
00:20:51.539 99.00000% : 36426.435us
00:20:51.539 99.50000% : 45059.290us
00:20:51.539 99.90000% : 46322.635us
00:20:51.539 99.99000% : 46954.307us
00:20:51.539 99.99900% : 46954.307us
00:20:51.539 99.99990% : 46954.307us
00:20:51.539 99.99999% : 46954.307us
00:20:51.540 
00:20:51.540 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:20:51.540 =================================================================================
00:20:51.540  1.00000% :  9475.084us
00:20:51.540 10.00000% : 10264.675us
00:20:51.540 25.00000% : 11422.741us
00:20:51.540 50.00000% : 13001.921us
00:20:51.540 75.00000% : 16212.922us
00:20:51.540 90.00000% : 18213.218us
00:20:51.540 95.00000% : 19581.841us
00:20:51.540 98.00000% : 20950.464us
00:20:51.540 99.00000% : 35584.206us
00:20:51.540 99.50000% : 42743.158us
00:20:51.540 99.90000% : 44006.503us
00:20:51.540 99.99000% : 44427.618us
00:20:51.540 99.99900% : 44427.618us
00:20:51.540 99.99990% : 44427.618us
00:20:51.540 99.99999% : 44427.618us
00:20:51.540 
00:20:51.540 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:20:51.540 =================================================================================
00:20:51.540  1.00000% :  9633.002us
00:20:51.540 10.00000% : 10212.035us
00:20:51.540 25.00000% : 11422.741us
00:20:51.540 50.00000% : 13159.839us
00:20:51.540 75.00000% : 15897.086us
00:20:51.540 90.00000% : 18423.775us
00:20:51.540 95.00000% : 19266.005us
00:20:51.540 98.00000% : 20213.513us
00:20:51.540 99.00000% : 34110.304us
00:20:51.540 99.50000% : 41058.699us
00:20:51.540 99.90000% : 42322.043us
00:20:51.540 99.99000% : 42743.158us
00:20:51.540 99.99900% : 42743.158us
00:20:51.540 99.99990% : 42743.158us
00:20:51.540 99.99999% : 42743.158us
00:20:51.540 
00:20:51.540 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:20:51.540 =================================================================================
00:20:51.540  1.00000% :  9633.002us
00:20:51.540 10.00000% : 10264.675us
00:20:51.540 25.00000% : 11422.741us
00:20:51.540 50.00000% : 13370.397us
00:20:51.540 75.00000% : 15686.529us
00:20:51.540 90.00000% : 18529.054us
00:20:51.540 95.00000% : 19160.726us
00:20:51.540 98.00000% : 19581.841us
00:20:51.540 99.00000% : 31794.172us
00:20:51.540 99.50000% : 39163.682us
00:20:51.540 99.90000% : 40427.027us
00:20:51.540 99.99000% : 40637.584us
00:20:51.540 99.99900% : 40637.584us
00:20:51.540 99.99990% : 40637.584us
00:20:51.540 99.99999% : 40637.584us
00:20:51.540 
00:20:51.540 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:20:51.540 =================================================================================
00:20:51.540  1.00000% :  9580.363us
00:20:51.540 10.00000% : 10264.675us
00:20:51.540 25.00000% : 11422.741us
00:20:51.540 50.00000% : 13212.479us
00:20:51.540 75.00000% : 15686.529us
00:20:51.540 90.00000% : 18318.496us
00:20:51.540 95.00000% : 19266.005us
00:20:51.540 98.00000% : 20002.956us
00:20:51.540 99.00000% : 29478.040us
00:20:51.540 99.50000% : 36636.993us
00:20:51.540 99.90000% : 38110.895us
00:20:51.540 99.99000% : 38321.452us
00:20:51.540 99.99900% : 38321.452us
00:20:51.540 99.99990% : 38321.452us
00:20:51.540 99.99999% : 38321.452us
00:20:51.540 
00:20:51.540 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:20:51.540 =================================================================================
00:20:51.540  1.00000% :  9580.363us
00:20:51.540 10.00000% : 10264.675us
00:20:51.540 25.00000% : 11475.380us
00:20:51.540 50.00000% : 13212.479us
00:20:51.540 75.00000% : 15897.086us
00:20:51.540 90.00000% : 18107.939us
00:20:51.540 95.00000% : 19476.562us
00:20:51.540 98.00000% : 20318.792us
00:20:51.540 99.00000% : 27583.023us
00:20:51.540 99.50000% : 34531.418us
00:20:51.540 99.90000% : 36005.320us
00:20:51.540 99.99000% : 36215.878us
00:20:51.540 99.99900% : 36215.878us
00:20:51.540 99.99990% : 36215.878us
00:20:51.540 99.99999% : 36215.878us
00:20:51.540 
00:20:51.540 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:20:51.540 ==============================================================================
00:20:51.540 Range in us Cumulative IO count
[... per-bucket cumulative latency entries omitted; they run from 8948.691 - 9001.330: 0.0109% ( 1) to 46743.749 - 46954.307: 100.0000% ( 1) ...]
00:20:51.541 
00:20:51.541 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:20:51.541 ==============================================================================
00:20:51.541 Range in us Cumulative IO count
[... per-bucket cumulative latency entries omitted; they run from 9001.330 - 9053.969: 0.0109% ( 1) to 44217.060 - 44427.618: 100.0000% ( 2) ...]
00:20:51.542 
00:20:51.542 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:20:51.542 ==============================================================================
00:20:51.542 Range in us Cumulative IO count
[... per-bucket cumulative latency entries omitted; they run from 9106.609 - 9159.248: 0.0109% ( 1) to 42532.601 - 42743.158: 100.0000% ( 4) ...]
00:20:51.543 
00:20:51.543 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:20:51.543 ==============================================================================
00:20:51.543 Range in us Cumulative IO count
[... per-bucket cumulative latency entries omitted; they run from 9264.527 - 9317.166: 0.0219% ( 2) to 40427.027 - 40637.584: 100.0000% ( 5) ...]
00:20:51.544 
00:20:51.544 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:20:51.544 ==============================================================================
00:20:51.544 Range in us Cumulative IO count
[... per-bucket cumulative latency entries omitted; they run from 9106.609 - 9159.248: 0.0109% ( 1) to 17265.709 - 17370.988: 84.6154% ( 57); the captured log breaks off at the next bucket ...]
00:20:51.545 17370.988 - 17476.267:
85.2710% ( 60) 00:20:51.545 17476.267 - 17581.545: 86.0140% ( 68) 00:20:51.545 17581.545 - 17686.824: 86.9100% ( 82) 00:20:51.545 17686.824 - 17792.103: 87.5000% ( 54) 00:20:51.545 17792.103 - 17897.382: 88.0463% ( 50) 00:20:51.545 17897.382 - 18002.660: 88.6145% ( 52) 00:20:51.545 18002.660 - 18107.939: 89.1171% ( 46) 00:20:51.545 18107.939 - 18213.218: 89.5979% ( 44) 00:20:51.545 18213.218 - 18318.496: 90.1552% ( 51) 00:20:51.545 18318.496 - 18423.775: 90.9856% ( 76) 00:20:51.545 18423.775 - 18529.054: 91.5865% ( 55) 00:20:51.545 18529.054 - 18634.333: 91.9908% ( 37) 00:20:51.545 18634.333 - 18739.611: 92.5044% ( 47) 00:20:51.545 18739.611 - 18844.890: 93.0179% ( 47) 00:20:51.545 18844.890 - 18950.169: 93.6735% ( 60) 00:20:51.545 18950.169 - 19055.447: 94.3728% ( 64) 00:20:51.545 19055.447 - 19160.726: 94.7662% ( 36) 00:20:51.545 19160.726 - 19266.005: 95.0612% ( 27) 00:20:51.545 19266.005 - 19371.284: 95.4655% ( 37) 00:20:51.545 19371.284 - 19476.562: 95.9025% ( 40) 00:20:51.545 19476.562 - 19581.841: 96.4270% ( 48) 00:20:51.545 19581.841 - 19687.120: 96.9624% ( 49) 00:20:51.545 19687.120 - 19792.398: 97.4104% ( 41) 00:20:51.545 19792.398 - 19897.677: 97.9895% ( 53) 00:20:51.545 19897.677 - 20002.956: 98.2080% ( 20) 00:20:51.545 20002.956 - 20108.235: 98.3501% ( 13) 00:20:51.545 20108.235 - 20213.513: 98.4594% ( 10) 00:20:51.545 20213.513 - 20318.792: 98.5468% ( 8) 00:20:51.545 20318.792 - 20424.071: 98.6014% ( 5) 00:20:51.545 28635.810 - 28846.368: 98.7981% ( 18) 00:20:51.545 28846.368 - 29056.925: 98.8964% ( 9) 00:20:51.545 29056.925 - 29267.483: 98.9620% ( 6) 00:20:51.545 29267.483 - 29478.040: 99.0166% ( 5) 00:20:51.545 29478.040 - 29688.598: 99.0712% ( 5) 00:20:51.545 29688.598 - 29899.155: 99.1259% ( 5) 00:20:51.545 29899.155 - 30109.712: 99.1696% ( 4) 00:20:51.545 30109.712 - 30320.270: 99.2242% ( 5) 00:20:51.545 30320.270 - 30530.827: 99.3007% ( 7) 00:20:51.545 35794.763 - 36005.320: 99.3226% ( 2) 00:20:51.545 36005.320 - 36215.878: 99.3881% ( 6) 00:20:51.545 36215.878 - 36426.435: 99.4537% ( 6) 00:20:51.545 36426.435 - 36636.993: 99.5083% ( 5) 00:20:51.545 36636.993 - 36847.550: 99.5739% ( 6) 00:20:51.545 36847.550 - 37058.108: 99.6394% ( 6) 00:20:51.545 37058.108 - 37268.665: 99.6941% ( 5) 00:20:51.545 37268.665 - 37479.222: 99.7596% ( 6) 00:20:51.545 37479.222 - 37689.780: 99.8252% ( 6) 00:20:51.545 37689.780 - 37900.337: 99.8907% ( 6) 00:20:51.545 37900.337 - 38110.895: 99.9563% ( 6) 00:20:51.545 38110.895 - 38321.452: 100.0000% ( 4) 00:20:51.545 00:20:51.545 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:20:51.546 ============================================================================== 00:20:51.546 Range in us Cumulative IO count 00:20:51.546 9211.888 - 9264.527: 0.0109% ( 1) 00:20:51.546 9317.166 - 9369.806: 0.0656% ( 5) 00:20:51.546 9369.806 - 9422.445: 0.1858% ( 11) 00:20:51.546 9422.445 - 9475.084: 0.4152% ( 21) 00:20:51.546 9475.084 - 9527.724: 0.8851% ( 43) 00:20:51.546 9527.724 - 9580.363: 1.3440% ( 42) 00:20:51.546 9580.363 - 9633.002: 2.4038% ( 97) 00:20:51.546 9633.002 - 9685.642: 3.0157% ( 56) 00:20:51.546 9685.642 - 9738.281: 3.7150% ( 64) 00:20:51.546 9738.281 - 9790.920: 4.4034% ( 63) 00:20:51.546 9790.920 - 9843.560: 5.0481% ( 59) 00:20:51.546 9843.560 - 9896.199: 5.8457% ( 73) 00:20:51.546 9896.199 - 9948.839: 6.4685% ( 57) 00:20:51.546 9948.839 - 10001.478: 7.0586% ( 54) 00:20:51.546 10001.478 - 10054.117: 7.6923% ( 58) 00:20:51.546 10054.117 - 10106.757: 8.2933% ( 55) 00:20:51.546 10106.757 - 10159.396: 9.1346% ( 77) 00:20:51.546 
10159.396 - 10212.035: 9.8121% ( 62) 00:20:51.546 10212.035 - 10264.675: 10.6862% ( 80) 00:20:51.546 10264.675 - 10317.314: 11.3746% ( 63) 00:20:51.546 10317.314 - 10369.953: 12.2159% ( 77) 00:20:51.546 10369.953 - 10422.593: 12.9589% ( 68) 00:20:51.546 10422.593 - 10475.232: 13.7347% ( 71) 00:20:51.546 10475.232 - 10527.871: 14.3466% ( 56) 00:20:51.546 10527.871 - 10580.511: 14.8601% ( 47) 00:20:51.546 10580.511 - 10633.150: 15.1770% ( 29) 00:20:51.546 10633.150 - 10685.790: 15.6031% ( 39) 00:20:51.546 10685.790 - 10738.429: 15.9637% ( 33) 00:20:51.546 10738.429 - 10791.068: 16.5647% ( 55) 00:20:51.546 10791.068 - 10843.708: 17.7010% ( 104) 00:20:51.546 10843.708 - 10896.347: 18.3676% ( 61) 00:20:51.546 10896.347 - 10948.986: 19.1543% ( 72) 00:20:51.546 10948.986 - 11001.626: 20.0721% ( 84) 00:20:51.546 11001.626 - 11054.265: 21.0446% ( 89) 00:20:51.546 11054.265 - 11106.904: 21.6018% ( 51) 00:20:51.546 11106.904 - 11159.544: 22.0389% ( 40) 00:20:51.546 11159.544 - 11212.183: 22.6508% ( 56) 00:20:51.546 11212.183 - 11264.822: 23.1971% ( 50) 00:20:51.546 11264.822 - 11317.462: 23.6560% ( 42) 00:20:51.546 11317.462 - 11370.101: 24.2351% ( 53) 00:20:51.546 11370.101 - 11422.741: 24.8142% ( 53) 00:20:51.546 11422.741 - 11475.380: 25.7758% ( 88) 00:20:51.546 11475.380 - 11528.019: 26.5953% ( 75) 00:20:51.546 11528.019 - 11580.659: 27.3601% ( 70) 00:20:51.546 11580.659 - 11633.298: 28.3654% ( 92) 00:20:51.546 11633.298 - 11685.937: 29.2177% ( 78) 00:20:51.546 11685.937 - 11738.577: 30.0918% ( 80) 00:20:51.546 11738.577 - 11791.216: 31.0205% ( 85) 00:20:51.546 11791.216 - 11843.855: 32.0258% ( 92) 00:20:51.546 11843.855 - 11896.495: 32.8781% ( 78) 00:20:51.546 11896.495 - 11949.134: 34.0581% ( 108) 00:20:51.546 11949.134 - 12001.773: 35.3147% ( 115) 00:20:51.546 12001.773 - 12054.413: 36.3855% ( 98) 00:20:51.546 12054.413 - 12107.052: 37.4454% ( 97) 00:20:51.546 12107.052 - 12159.692: 38.3413% ( 82) 00:20:51.546 12159.692 - 12212.331: 39.4449% ( 101) 00:20:51.546 12212.331 - 12264.970: 40.6141% ( 107) 00:20:51.546 12264.970 - 12317.610: 41.5538% ( 86) 00:20:51.546 12317.610 - 12370.249: 42.2094% ( 60) 00:20:51.546 12370.249 - 12422.888: 42.7994% ( 54) 00:20:51.546 12422.888 - 12475.528: 43.3348% ( 49) 00:20:51.546 12475.528 - 12528.167: 43.7391% ( 37) 00:20:51.546 12528.167 - 12580.806: 44.0559% ( 29) 00:20:51.546 12580.806 - 12633.446: 44.4602% ( 37) 00:20:51.546 12633.446 - 12686.085: 44.8317% ( 34) 00:20:51.546 12686.085 - 12738.724: 45.2032% ( 34) 00:20:51.546 12738.724 - 12791.364: 45.5966% ( 36) 00:20:51.546 12791.364 - 12844.003: 46.1101% ( 47) 00:20:51.546 12844.003 - 12896.643: 46.7220% ( 56) 00:20:51.546 12896.643 - 12949.282: 47.6071% ( 81) 00:20:51.546 12949.282 - 13001.921: 48.2955% ( 63) 00:20:51.546 13001.921 - 13054.561: 48.8199% ( 48) 00:20:51.546 13054.561 - 13107.200: 49.3007% ( 44) 00:20:51.546 13107.200 - 13159.839: 49.8689% ( 52) 00:20:51.546 13159.839 - 13212.479: 50.6228% ( 69) 00:20:51.546 13212.479 - 13265.118: 51.3112% ( 63) 00:20:51.546 13265.118 - 13317.757: 51.9012% ( 54) 00:20:51.546 13317.757 - 13370.397: 52.4476% ( 50) 00:20:51.546 13370.397 - 13423.036: 52.8518% ( 37) 00:20:51.546 13423.036 - 13475.676: 53.4528% ( 55) 00:20:51.546 13475.676 - 13580.954: 54.5345% ( 99) 00:20:51.546 13580.954 - 13686.233: 55.4961% ( 88) 00:20:51.546 13686.233 - 13791.512: 56.2609% ( 70) 00:20:51.546 13791.512 - 13896.790: 57.1678% ( 83) 00:20:51.546 13896.790 - 14002.069: 57.8890% ( 66) 00:20:51.546 14002.069 - 14107.348: 58.5774% ( 63) 00:20:51.546 14107.348 - 14212.627: 59.2657% 
( 63) 00:20:51.546 14212.627 - 14317.905: 59.9323% ( 61) 00:20:51.546 14317.905 - 14423.184: 60.8501% ( 84) 00:20:51.546 14423.184 - 14528.463: 61.7461% ( 82) 00:20:51.546 14528.463 - 14633.741: 62.4672% ( 66) 00:20:51.546 14633.741 - 14739.020: 63.3851% ( 84) 00:20:51.546 14739.020 - 14844.299: 64.4340% ( 96) 00:20:51.546 14844.299 - 14949.578: 65.5922% ( 106) 00:20:51.546 14949.578 - 15054.856: 66.7067% ( 102) 00:20:51.546 15054.856 - 15160.135: 68.1053% ( 128) 00:20:51.546 15160.135 - 15265.414: 69.1543% ( 96) 00:20:51.546 15265.414 - 15370.692: 70.1814% ( 94) 00:20:51.546 15370.692 - 15475.971: 71.5144% ( 122) 00:20:51.546 15475.971 - 15581.250: 72.7273% ( 111) 00:20:51.546 15581.250 - 15686.529: 73.7216% ( 91) 00:20:51.546 15686.529 - 15791.807: 74.9891% ( 116) 00:20:51.546 15791.807 - 15897.086: 76.2347% ( 114) 00:20:51.546 15897.086 - 16002.365: 76.9996% ( 70) 00:20:51.546 16002.365 - 16107.643: 77.7535% ( 69) 00:20:51.546 16107.643 - 16212.922: 78.6276% ( 80) 00:20:51.546 16212.922 - 16318.201: 79.3925% ( 70) 00:20:51.546 16318.201 - 16423.480: 80.0262% ( 58) 00:20:51.546 16423.480 - 16528.758: 80.9987% ( 89) 00:20:51.546 16528.758 - 16634.037: 81.6652% ( 61) 00:20:51.546 16634.037 - 16739.316: 82.3754% ( 65) 00:20:51.546 16739.316 - 16844.594: 83.2277% ( 78) 00:20:51.546 16844.594 - 16949.873: 83.8833% ( 60) 00:20:51.546 16949.873 - 17055.152: 84.4733% ( 54) 00:20:51.546 17055.152 - 17160.431: 84.9650% ( 45) 00:20:51.546 17160.431 - 17265.709: 85.4567% ( 45) 00:20:51.546 17265.709 - 17370.988: 85.9594% ( 46) 00:20:51.546 17370.988 - 17476.267: 86.5385% ( 53) 00:20:51.546 17476.267 - 17581.545: 87.0739% ( 49) 00:20:51.546 17581.545 - 17686.824: 87.6093% ( 49) 00:20:51.546 17686.824 - 17792.103: 88.4943% ( 81) 00:20:51.546 17792.103 - 17897.382: 89.1390% ( 59) 00:20:51.546 17897.382 - 18002.660: 89.7181% ( 53) 00:20:51.546 18002.660 - 18107.939: 90.2426% ( 48) 00:20:51.546 18107.939 - 18213.218: 90.6578% ( 38) 00:20:51.546 18213.218 - 18318.496: 91.0730% ( 38) 00:20:51.546 18318.496 - 18423.775: 91.4991% ( 39) 00:20:51.546 18423.775 - 18529.054: 91.9580% ( 42) 00:20:51.546 18529.054 - 18634.333: 92.4279% ( 43) 00:20:51.546 18634.333 - 18739.611: 92.7557% ( 30) 00:20:51.546 18739.611 - 18844.890: 92.9961% ( 22) 00:20:51.546 18844.890 - 18950.169: 93.2474% ( 23) 00:20:51.546 18950.169 - 19055.447: 93.5424% ( 27) 00:20:51.546 19055.447 - 19160.726: 93.8265% ( 26) 00:20:51.546 19160.726 - 19266.005: 94.1324% ( 28) 00:20:51.546 19266.005 - 19371.284: 94.7006% ( 52) 00:20:51.546 19371.284 - 19476.562: 95.2032% ( 46) 00:20:51.546 19476.562 - 19581.841: 95.5310% ( 30) 00:20:51.546 19581.841 - 19687.120: 95.9025% ( 34) 00:20:51.546 19687.120 - 19792.398: 96.3287% ( 39) 00:20:51.546 19792.398 - 19897.677: 96.6892% ( 33) 00:20:51.546 19897.677 - 20002.956: 97.2465% ( 51) 00:20:51.546 20002.956 - 20108.235: 97.6945% ( 41) 00:20:51.546 20108.235 - 20213.513: 97.9458% ( 23) 00:20:51.546 20213.513 - 20318.792: 98.1862% ( 22) 00:20:51.546 20318.792 - 20424.071: 98.2955% ( 10) 00:20:51.546 20424.071 - 20529.349: 98.3392% ( 4) 00:20:51.546 20529.349 - 20634.628: 98.3829% ( 4) 00:20:51.546 20634.628 - 20739.907: 98.4266% ( 4) 00:20:51.546 20739.907 - 20845.186: 98.4812% ( 5) 00:20:51.546 20845.186 - 20950.464: 98.5249% ( 4) 00:20:51.546 20950.464 - 21055.743: 98.5795% ( 5) 00:20:51.547 21055.743 - 21161.022: 98.6014% ( 2) 00:20:51.547 26740.794 - 26846.072: 98.7434% ( 13) 00:20:51.547 26846.072 - 26951.351: 98.8527% ( 10) 00:20:51.547 26951.351 - 27161.908: 98.9292% ( 7) 00:20:51.547 27161.908 - 
27372.466: 98.9838% ( 5) 00:20:51.547 27372.466 - 27583.023: 99.0385% ( 5) 00:20:51.547 27583.023 - 27793.581: 99.0931% ( 5) 00:20:51.547 27793.581 - 28004.138: 99.1587% ( 6) 00:20:51.547 28004.138 - 28214.696: 99.2133% ( 5) 00:20:51.547 28214.696 - 28425.253: 99.2679% ( 5) 00:20:51.547 28425.253 - 28635.810: 99.3007% ( 3) 00:20:51.547 32425.844 - 32636.402: 99.3116% ( 1) 00:20:51.547 32636.402 - 32846.959: 99.4427% ( 12) 00:20:51.547 32846.959 - 33057.516: 99.4537% ( 1) 00:20:51.547 34110.304 - 34320.861: 99.4974% ( 4) 00:20:51.547 34320.861 - 34531.418: 99.5520% ( 5) 00:20:51.547 34531.418 - 34741.976: 99.6066% ( 5) 00:20:51.547 34741.976 - 34952.533: 99.6613% ( 5) 00:20:51.547 34952.533 - 35163.091: 99.7159% ( 5) 00:20:51.547 35163.091 - 35373.648: 99.7815% ( 6) 00:20:51.547 35373.648 - 35584.206: 99.8361% ( 5) 00:20:51.547 35584.206 - 35794.763: 99.8907% ( 5) 00:20:51.547 35794.763 - 36005.320: 99.9563% ( 6) 00:20:51.547 36005.320 - 36215.878: 100.0000% ( 4) 00:20:51.547 00:20:51.547 16:35:27 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:20:51.547 00:20:51.547 real 0m2.775s 00:20:51.547 user 0m2.277s 00:20:51.547 sys 0m0.362s 00:20:51.547 16:35:27 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:51.547 ************************************ 00:20:51.547 END TEST nvme_perf 00:20:51.547 16:35:27 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:20:51.547 ************************************ 00:20:51.547 16:35:27 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:20:51.547 16:35:27 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:51.547 16:35:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:51.547 16:35:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:51.547 ************************************ 00:20:51.547 START TEST nvme_hello_world 00:20:51.547 ************************************ 00:20:51.547 16:35:27 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:20:51.806 Initializing NVMe Controllers 00:20:51.806 Attached to 0000:00:10.0 00:20:51.806 Namespace ID: 1 size: 6GB 00:20:51.806 Attached to 0000:00:11.0 00:20:51.806 Namespace ID: 1 size: 5GB 00:20:51.806 Attached to 0000:00:13.0 00:20:51.806 Namespace ID: 1 size: 1GB 00:20:51.806 Attached to 0000:00:12.0 00:20:51.806 Namespace ID: 1 size: 4GB 00:20:51.806 Namespace ID: 2 size: 4GB 00:20:51.806 Namespace ID: 3 size: 4GB 00:20:51.806 Initialization complete. 00:20:51.806 INFO: using host memory buffer for IO 00:20:51.806 Hello world! 00:20:51.806 INFO: using host memory buffer for IO 00:20:51.806 Hello world! 00:20:51.806 INFO: using host memory buffer for IO 00:20:51.806 Hello world! 00:20:51.806 INFO: using host memory buffer for IO 00:20:51.806 Hello world! 00:20:51.806 INFO: using host memory buffer for IO 00:20:51.806 Hello world! 00:20:51.806 INFO: using host memory buffer for IO 00:20:51.806 Hello world! 
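For readers cross-referencing the SPDK tree: the hello_world example exercised above follows the driver's canonical probe / attach / queue-pair flow, and "using host memory buffer" simply means the emulated controllers expose no controller memory buffer for the payload. A minimal sketch of that flow against the public SPDK NVMe API follows; error handling is trimmed, and the single-controller bookkeeping, LBA, and buffer size are illustrative choices, not the example's actual code.

#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;

/* Claim every controller the PCIe scan reports. */
static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts) { return true; }

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts) { g_ctrlr = ctrlr; }

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl) { *(bool *)arg = true; }

int main(void)
{
    struct spdk_env_opts env_opts;
    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) < 0) return 1;

    /* Enumerate local controllers; attach_cb fires once per device. */
    spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(g_ctrlr, 1);
    struct spdk_nvme_qpair *qp = spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);

    /* Pinned, DMA-safe buffer; 4 KiB covers one block at 512 B or 4 KiB LBAs. */
    char *buf = spdk_zmalloc(0x1000, 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    snprintf(buf, 0x1000, "%s", "Hello world!");

    bool done = false;
    spdk_nvme_ns_cmd_write(ns, qp, buf, 0 /* LBA */, 1 /* block count */, io_done, &done, 0);
    while (!done) {
        spdk_nvme_qpair_process_completions(qp, 0); /* polled driver, no interrupts */
    }

    spdk_free(buf);
    spdk_nvme_ctrlr_free_io_qpair(qp);
    spdk_nvme_detach(g_ctrlr);
    return 0;
}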
00:20:52.067
00:20:52.067 real    0m0.330s
00:20:52.067 user    0m0.117s
00:20:52.067 sys     0m0.161s
00:20:52.067 16:35:28 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:52.067 16:35:28 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:20:52.067 ************************************
00:20:52.067 END TEST nvme_hello_world
00:20:52.067 ************************************
00:20:52.067 16:35:28 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:20:52.067 16:35:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:20:52.067 16:35:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:52.067 16:35:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:52.067 ************************************
00:20:52.067 START TEST nvme_sgl
00:20:52.067 ************************************
00:20:52.067 16:35:28 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:20:52.329 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:20:52.329 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:20:52.329 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:20:52.329 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:20:52.329 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:20:52.329 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:20:52.329 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:20:52.329 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:20:52.329 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:20:52.329 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:20:52.329 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:20:52.329 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:20:52.329 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:20:52.329 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:20:52.329 NVMe Readv/Writev Request test
00:20:52.329 Attached to 0000:00:10.0
00:20:52.329 Attached to 0000:00:11.0
00:20:52.329 Attached to 0000:00:13.0
00:20:52.329 Attached to 0000:00:12.0
00:20:52.329 0000:00:10.0: build_io_request_2 test passed
00:20:52.329 0000:00:10.0: build_io_request_4 test passed
00:20:52.329 0000:00:10.0: build_io_request_5 test passed
00:20:52.329 0000:00:10.0: build_io_request_6 test passed
00:20:52.329 0000:00:10.0: build_io_request_7 test passed
00:20:52.329 0000:00:10.0: build_io_request_10 test passed
00:20:52.329 0000:00:11.0: build_io_request_2 test passed
00:20:52.329 0000:00:11.0: build_io_request_4 test passed
00:20:52.329 0000:00:11.0: build_io_request_5 test passed
00:20:52.330 0000:00:11.0: build_io_request_6 test passed
00:20:52.330 0000:00:11.0: build_io_request_7 test passed
00:20:52.330 0000:00:11.0: build_io_request_10 test passed
00:20:52.330 Cleaning up...
00:20:52.330
00:20:52.330 real    0m0.381s
00:20:52.330 user    0m0.182s
00:20:52.330 sys     0m0.149s
00:20:52.330 16:35:28 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:52.330 ************************************
00:20:52.330 END TEST nvme_sgl
00:20:52.330 16:35:28 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:20:52.330 ************************************
00:20:52.330 16:35:28 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:20:52.330 16:35:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:20:52.330 16:35:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:52.330 16:35:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:52.588 ************************************
00:20:52.588 START TEST nvme_e2edp
00:20:52.588 ************************************
00:20:52.588 16:35:28 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:20:52.845 NVMe Write/Read with End-to-End data protection test
00:20:52.845 Attached to 0000:00:10.0
00:20:52.845 Attached to 0000:00:11.0
00:20:52.845 Attached to 0000:00:13.0
00:20:52.845 Attached to 0000:00:12.0
00:20:52.845 Cleaning up...
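On the nvme_sgl run above: the pass/fail split reflects how SPDK builds scattered payloads. spdk_nvme_ns_cmd_readv()/writev() pull the buffer list through two user callbacks, and a request whose summed segment lengths cannot be reconciled with the block count is failed up front, which the test reports as "Invalid IO length parameter". A hedged sketch of the callback plumbing follows; the two-segment context struct is invented for illustration.

#include <stdint.h>
#include "spdk/nvme.h"

/* Hypothetical two-segment payload for illustration only. */
struct sgl_ctx {
    void     *seg[2];
    uint32_t  len[2];
    int       idx;
};

/* The driver rewinds the iterator to 'offset' bytes into the payload. */
static void reset_sgl(void *arg, uint32_t offset)
{
    struct sgl_ctx *c = arg;
    c->idx = 0;
    (void)offset; /* single-pass illustration; real code seeks to offset */
}

/* The driver asks for the next scatter-gather element. */
static int next_sge(void *arg, void **addr, uint32_t *len)
{
    struct sgl_ctx *c = arg;
    *addr = c->seg[c->idx];
    *len = c->len[c->idx];
    c->idx++;
    return 0;
}

static int submit_scattered_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                                  struct sgl_ctx *c, spdk_nvme_cmd_cb cb, void *cb_arg)
{
    uint32_t total = c->len[0] + c->len[1];
    uint32_t blocks = total / spdk_nvme_ns_get_sector_size(ns);

    /* When total is not a whole number of blocks, the library rejects the
     * request immediately, the failure mode logged above. */
    return spdk_nvme_ns_cmd_writev(ns, qp, 0 /* LBA */, blocks,
                                   cb, cb_arg, 0, reset_sgl, next_sge);
}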
00:20:52.845 ************************************
00:20:52.845 END TEST nvme_e2edp
00:20:52.845 ************************************
00:20:52.845
00:20:52.845 real    0m0.350s
00:20:52.845 user    0m0.114s
00:20:52.845 sys     0m0.175s
00:20:52.845 16:35:28 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:52.845 16:35:28 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:20:52.845 16:35:29 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:20:52.845 16:35:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:20:52.845 16:35:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:52.845 16:35:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:52.845 ************************************
00:20:52.845 START TEST nvme_reserve
00:20:52.845 ************************************
00:20:52.845 16:35:29 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:20:53.104 =====================================================
00:20:53.104 NVMe Controller at PCI bus 0, device 16, function 0
00:20:53.104 =====================================================
00:20:53.104 Reservations:                Not Supported
00:20:53.104 =====================================================
00:20:53.104 NVMe Controller at PCI bus 0, device 17, function 0
00:20:53.104 =====================================================
00:20:53.104 Reservations:                Not Supported
00:20:53.104 =====================================================
00:20:53.104 NVMe Controller at PCI bus 0, device 19, function 0
00:20:53.104 =====================================================
00:20:53.104 Reservations:                Not Supported
00:20:53.104 =====================================================
00:20:53.104 NVMe Controller at PCI bus 0, device 18, function 0
00:20:53.104 =====================================================
00:20:53.104 Reservations:                Not Supported
00:20:53.104 Reservation test passed
00:20:53.104
00:20:53.104 real    0m0.302s
00:20:53.104 user    0m0.098s
00:20:53.104 sys     0m0.160s
00:20:53.104 ************************************
00:20:53.104 END TEST nvme_reserve
00:20:53.104 ************************************
00:20:53.104 16:35:29 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:53.104 16:35:29 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:20:53.363 16:35:29 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:20:53.363 16:35:29 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:20:53.363 16:35:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:53.363 16:35:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:53.363 ************************************
00:20:53.363 START TEST nvme_err_injection
00:20:53.363 ************************************
00:20:53.363 16:35:29 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:20:53.621 NVMe Error Injection test
00:20:53.621 Attached to 0000:00:10.0
00:20:53.621 Attached to 0000:00:11.0
00:20:53.621 Attached to 0000:00:13.0
00:20:53.621 Attached to 0000:00:12.0
00:20:53.621 0000:00:13.0: get features failed as expected
00:20:53.621 0000:00:12.0: get features failed as expected
00:20:53.621 0000:00:10.0: get features failed as expected
00:20:53.621 0000:00:11.0: get features failed as expected
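All four emulated controllers report "Reservations: Not Supported" in the nvme_reserve run above, so the test reduces to a capability probe before any register/acquire commands are attempted. A sketch of that check, assuming the standard Identify Controller layout exposed through SPDK (the print string mirrors the log):

#include <stdio.h>
#include "spdk/nvme.h"

/* Reservation support is advertised in the controller's ONCS field. */
static void check_reservations(struct spdk_nvme_ctrlr *ctrlr)
{
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

    if (!cdata->oncs.reservations) {
        printf("Reservations:                Not Supported\n");
        return;
    }
    /* Only reached on hardware that supports the feature; the QEMU
     * controllers in this run do not, so the register/acquire path
     * (spdk_nvme_ns_cmd_reservation_register() and friends) is skipped. */
    printf("Reservations:                Supported\n");
}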
0000:00:11.0: get features successfully as expected 00:20:53.621 0000:00:13.0: get features successfully as expected 00:20:53.621 0000:00:12.0: get features successfully as expected 00:20:53.621 0000:00:10.0: get features successfully as expected 00:20:53.621 0000:00:10.0: read failed as expected 00:20:53.621 0000:00:11.0: read failed as expected 00:20:53.621 0000:00:13.0: read failed as expected 00:20:53.621 0000:00:12.0: read failed as expected 00:20:53.621 0000:00:10.0: read successfully as expected 00:20:53.621 0000:00:11.0: read successfully as expected 00:20:53.621 0000:00:13.0: read successfully as expected 00:20:53.621 0000:00:12.0: read successfully as expected 00:20:53.621 Cleaning up... 00:20:53.621 00:20:53.621 real 0m0.333s 00:20:53.621 user 0m0.118s 00:20:53.621 sys 0m0.161s 00:20:53.621 ************************************ 00:20:53.621 END TEST nvme_err_injection 00:20:53.621 ************************************ 00:20:53.622 16:35:29 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:53.622 16:35:29 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:20:53.622 16:35:29 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:20:53.622 16:35:29 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:20:53.622 16:35:29 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:53.622 16:35:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:20:53.622 ************************************ 00:20:53.622 START TEST nvme_overhead 00:20:53.622 ************************************ 00:20:53.622 16:35:29 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:20:55.028 Initializing NVMe Controllers 00:20:55.028 Attached to 0000:00:10.0 00:20:55.028 Attached to 0000:00:11.0 00:20:55.028 Attached to 0000:00:13.0 00:20:55.028 Attached to 0000:00:12.0 00:20:55.028 Initialization complete. Launching workers. 
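The "failed as expected" / "successfully as expected" pairs in the nvme_err_injection run are produced with SPDK's software command error injection, armed and then cleared per controller. A sketch under that assumption; the opcode and status codes below are illustrative choices, not necessarily the ones the test uses.

#include "spdk/nvme.h"

/* Make the next Get Features admin command complete with an error
 * instead of reaching the device. */
static int arm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    return spdk_nvme_qpair_add_cmd_error_injection(ctrlr,
            NULL,                       /* NULL targets the admin queue */
            SPDK_NVME_OPC_GET_FEATURES, /* opcode to intercept */
            true,                       /* complete in software, do not submit */
            0,                          /* no artificial timeout */
            1,                          /* fail exactly one command */
            SPDK_NVME_SCT_GENERIC,
            SPDK_NVME_SC_INVALID_FIELD);
}

/* Once the expected failure has been observed, clear the injection so the
 * same command can then "succeed as expected". */
static void disarm_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
            SPDK_NVME_OPC_GET_FEATURES);
}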
00:20:55.028 submit (in ns)   avg, min, max =  14831.6,  11939.8, 159262.7
00:20:55.028 complete (in ns) avg, min, max =   9481.7,   7741.4, 435341.4
00:20:55.028
00:20:55.028 Submit histogram
00:20:55.028 ================
00:20:55.028        Range in us     Cumulative     Count
00:20:55.029 [ per-bucket counts from 11.926 us through 159.563 us condensed; cumulative count reaches 100.0000% ( 1) in the 158.741 - 159.563 us bucket ]
00:20:55.029
00:20:55.029 Complete histogram
00:20:55.029 ==================
00:20:55.029        Range in us     Cumulative     Count
00:20:55.031 [ per-bucket counts from 7.711 us through 437.565 us condensed; cumulative count reaches 100.0000% ( 1) in the 434.275 - 437.565 us bucket ]
00:20:55.031
00:20:55.031 ************************************
00:20:55.031 END TEST nvme_overhead
00:20:55.031 ************************************
00:20:55.031
00:20:55.031 real    0m1.294s
00:20:55.031 user    0m1.093s
00:20:55.031 sys     0m0.151s
00:20:55.031 16:35:31 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:55.031 16:35:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:20:55.031 16:35:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:20:55.031 16:35:31 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:20:55.031 16:35:31 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:55.031 16:35:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:55.031 ************************************
00:20:55.031 START TEST nvme_arbitration
00:20:55.031 ************************************
00:20:55.031 16:35:31 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:20:59.244 Initializing NVMe Controllers
00:20:59.244 Attached to 0000:00:10.0
00:20:59.244 Attached to 0000:00:11.0
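The submit/complete figures reported by nvme_overhead are per-command software overheads rather than device latency, gathered by timestamping each I/O around the queue-pair calls. A rough sketch of one such sample using SPDK's TSC helpers; the single-block read mirrors the -o 4096 option, and the bookkeeping is illustrative.

#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

struct timing {
    uint64_t submit_tsc;   /* ticks spent inside the submit call */
    uint64_t complete_tsc; /* ticks spent in the poll call that reaped the I/O */
    bool     done;
};

static void read_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    ((struct timing *)arg)->done = true;
}

/* One sample: how long submission and completion processing take in host
 * software, independent of how long the device holds the command. */
static void sample_overhead(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
                            void *buf, struct timing *t)
{
    uint64_t tick_rate = spdk_get_ticks_hz();
    uint64_t start;

    t->done = false;
    start = spdk_get_ticks();
    spdk_nvme_ns_cmd_read(ns, qp, buf, 0, 1, read_done, t, 0);
    t->submit_tsc = spdk_get_ticks() - start;

    do {
        start = spdk_get_ticks();
        spdk_nvme_qpair_process_completions(qp, 0);
    } while (!t->done);
    t->complete_tsc = spdk_get_ticks() - start;

    printf("submit: %.1f ns\n", (double)t->submit_tsc * 1e9 / tick_rate);
}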
00:20:59.244 Attached to 0000:00:13.0
00:20:59.244 Attached to 0000:00:12.0
00:20:59.244 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:20:59.244 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:20:59.244 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:20:59.244 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:20:59.244 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:20:59.244 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:20:59.244 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:20:59.244 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:20:59.244 Initialization complete. Launching workers.
00:20:59.244 Starting thread on core 1 with urgent priority queue
00:20:59.244 Starting thread on core 2 with urgent priority queue
00:20:59.244 Starting thread on core 3 with urgent priority queue
00:20:59.244 Starting thread on core 0 with urgent priority queue
00:20:59.244 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios
00:20:59.244 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios
00:20:59.244 QEMU NVMe Ctrl (12341 ) core 1: 597.33 IO/s 167.41 secs/100000 ios
00:20:59.244 QEMU NVMe Ctrl (12342 ) core 1: 597.33 IO/s 167.41 secs/100000 ios
00:20:59.244 QEMU NVMe Ctrl (12343 ) core 2: 618.67 IO/s 161.64 secs/100000 ios
00:20:59.244 QEMU NVMe Ctrl (12342 ) core 3: 512.00 IO/s 195.31 secs/100000 ios
00:20:59.244 ========================================================
00:20:59.244
00:20:59.244
00:20:59.244 real    0m3.479s
00:20:59.244 user    0m9.415s
00:20:59.244 sys     0m0.172s
00:20:59.244 16:35:34 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:59.244 ************************************
00:20:59.244 END TEST nvme_arbitration
00:20:59.244 ************************************
00:20:59.244 16:35:34 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:20:59.244 16:35:34 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:20:59.244 16:35:34 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:20:59.244 16:35:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:59.244 16:35:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:59.244 ************************************
00:20:59.244 START TEST nvme_single_aen
00:20:59.244 ************************************
00:20:59.244 16:35:34 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:20:59.244 Asynchronous Event Request test
00:20:59.244 Attached to 0000:00:10.0
00:20:59.244 Attached to 0000:00:11.0
00:20:59.244 Attached to 0000:00:13.0
00:20:59.244 Attached to 0000:00:12.0
00:20:59.244 Reset controller to setup AER completions for this process
00:20:59.244 Registering asynchronous event callbacks...
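Each arbitration worker above drives an "urgent priority queue", that is, an I/O queue pair created in a weighted-round-robin priority class. A minimal sketch of how such a qpair is requested, assuming the controller was brought up with WRR arbitration enabled:

#include "spdk/nvme.h"

/* Ask for an I/O queue pair in the urgent WRR class; only meaningful when
 * the controller's arbitration mechanism is weighted round robin. */
static struct spdk_nvme_qpair *alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_io_qpair_opts opts;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.qprio = SPDK_NVME_QPRIO_URGENT; /* HIGH, MEDIUM, LOW are the other classes */

    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}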
00:20:59.244 Getting orig temperature thresholds of all controllers
00:20:59.244 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:59.244 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:59.244 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:59.244 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:20:59.244 Setting all controllers temperature threshold low to trigger AER
00:20:59.244 Waiting for all controllers temperature threshold to be set lower
00:20:59.244 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:59.244 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:20:59.244 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:59.244 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:20:59.244 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:59.244 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:20:59.244 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:20:59.244 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:20:59.244 Waiting for all controllers to trigger AER and reset threshold
00:20:59.244 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:59.244 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:59.244 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:59.244 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:20:59.244 Cleaning up...
00:20:59.244
00:20:59.244 real    0m0.301s
00:20:59.244 user    0m0.108s
00:20:59.244 sys     0m0.146s
00:20:59.244 16:35:35 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:59.244 ************************************
00:20:59.244 END TEST nvme_single_aen
00:20:59.244 ************************************
00:20:59.244 16:35:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:20:59.244 16:35:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:20:59.244 16:35:35 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:20:59.244 16:35:35 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:59.244 16:35:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:20:59.244 ************************************
00:20:59.244 START TEST nvme_doorbell_aers
00:20:59.244 ************************************
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
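The aer test above follows a fixed protocol: register an AER callback, push the temperature threshold below the emulated sensor reading so the controller fires a Temperature Threshold event, then restore the threshold from the callback. A sketch of the arming side; the 200 K value is an illustrative low threshold, and the cdw0 bit layout follows the NVMe specification.

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static volatile bool g_aen_seen;

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    /* cdw0 of an AER completion: event type in bits 2:0, info in bits 15:8. */
    uint32_t aen = cpl->cdw0;
    printf("aen_event_type: 0x%02x, aen_event_info: 0x%02x\n",
           aen & 0x7, (aen >> 8) & 0xff);
    g_aen_seen = true;
}

static void set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl) {}

static void arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    /* 200 K is far below the emulated 323 K reading, so the controller
     * raises a temperature AEN almost immediately. */
    spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                    200 /* cdw11: threshold in Kelvin */, 0,
                                    NULL, 0, set_feature_done, NULL);

    /* AERs complete on the admin queue, so poll it until the event lands. */
    while (!g_aen_seen) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
}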
00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:20:59.244 16:35:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:59.504 [2024-10-17 16:35:35.560315] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:09.480 Executing: test_write_invalid_db 00:21:09.480 Waiting for AER completion... 00:21:09.480 Failure: test_write_invalid_db 00:21:09.480 00:21:09.480 Executing: test_invalid_db_write_overflow_sq 00:21:09.480 Waiting for AER completion... 00:21:09.480 Failure: test_invalid_db_write_overflow_sq 00:21:09.480 00:21:09.480 Executing: test_invalid_db_write_overflow_cq 00:21:09.480 Waiting for AER completion... 00:21:09.480 Failure: test_invalid_db_write_overflow_cq 00:21:09.480 00:21:09.480 16:35:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:09.480 16:35:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:21:09.480 [2024-10-17 16:35:45.639407] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:19.457 Executing: test_write_invalid_db 00:21:19.457 Waiting for AER completion... 00:21:19.457 Failure: test_write_invalid_db 00:21:19.457 00:21:19.457 Executing: test_invalid_db_write_overflow_sq 00:21:19.457 Waiting for AER completion... 00:21:19.457 Failure: test_invalid_db_write_overflow_sq 00:21:19.457 00:21:19.457 Executing: test_invalid_db_write_overflow_cq 00:21:19.457 Waiting for AER completion... 00:21:19.457 Failure: test_invalid_db_write_overflow_cq 00:21:19.457 00:21:19.457 16:35:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:19.457 16:35:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:21:19.457 [2024-10-17 16:35:55.669440] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:29.445 Executing: test_write_invalid_db 00:21:29.445 Waiting for AER completion... 00:21:29.445 Failure: test_write_invalid_db 00:21:29.445 00:21:29.445 Executing: test_invalid_db_write_overflow_sq 00:21:29.445 Waiting for AER completion... 00:21:29.445 Failure: test_invalid_db_write_overflow_sq 00:21:29.445 00:21:29.445 Executing: test_invalid_db_write_overflow_cq 00:21:29.445 Waiting for AER completion... 
00:21:29.445 Failure: test_invalid_db_write_overflow_cq 00:21:29.445 00:21:29.445 16:36:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:21:29.445 16:36:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:21:29.445 [2024-10-17 16:36:05.718683] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.425 Executing: test_write_invalid_db 00:21:39.425 Waiting for AER completion... 00:21:39.425 Failure: test_write_invalid_db 00:21:39.425 00:21:39.425 Executing: test_invalid_db_write_overflow_sq 00:21:39.425 Waiting for AER completion... 00:21:39.425 Failure: test_invalid_db_write_overflow_sq 00:21:39.425 00:21:39.425 Executing: test_invalid_db_write_overflow_cq 00:21:39.425 Waiting for AER completion... 00:21:39.425 Failure: test_invalid_db_write_overflow_cq 00:21:39.425 00:21:39.425 00:21:39.425 real 0m40.325s 00:21:39.425 user 0m28.293s 00:21:39.425 sys 0m11.603s 00:21:39.425 16:36:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.425 16:36:15 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:21:39.425 ************************************ 00:21:39.425 END TEST nvme_doorbell_aers 00:21:39.425 ************************************ 00:21:39.425 16:36:15 nvme -- nvme/nvme.sh@97 -- # uname 00:21:39.425 16:36:15 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:21:39.425 16:36:15 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:21:39.425 16:36:15 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:21:39.425 16:36:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:39.425 16:36:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:39.425 ************************************ 00:21:39.425 START TEST nvme_multi_aen 00:21:39.425 ************************************ 00:21:39.425 16:36:15 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:21:39.684 [2024-10-17 16:36:15.819483] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.819858] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.819902] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.822099] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.822161] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.822184] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.823970] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. 
Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.824027] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.824052] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.825881] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.825939] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 [2024-10-17 16:36:15.825963] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:21:39.684 Child process pid: 65006 00:21:39.943 [Child] Asynchronous Event Request test 00:21:39.943 [Child] Attached to 0000:00:10.0 00:21:39.943 [Child] Attached to 0000:00:11.0 00:21:39.943 [Child] Attached to 0000:00:13.0 00:21:39.943 [Child] Attached to 0000:00:12.0 00:21:39.943 [Child] Registering asynchronous event callbacks... 00:21:39.943 [Child] Getting orig temperature thresholds of all controllers 00:21:39.943 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 [Child] Waiting for all controllers to trigger AER and reset threshold 00:21:39.943 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 [Child] Cleaning up... 00:21:39.943 Asynchronous Event Request test 00:21:39.943 Attached to 0000:00:10.0 00:21:39.943 Attached to 0000:00:11.0 00:21:39.943 Attached to 0000:00:13.0 00:21:39.943 Attached to 0000:00:12.0 00:21:39.943 Reset controller to setup AER completions for this process 00:21:39.943 Registering asynchronous event callbacks... 
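The [Child]-prefixed block above comes from the process the aer tool forks when run with -m; the unprefixed block that follows is the parent repeating the same threshold sequence. A sketch of the invocation (flag meanings are inferred from this output, not from the tool's usage text):

# -m: fork a child that attaches to the same controllers and runs the
#     sequence first ([Child] lines above); -i 0: shared-memory id so both
#     processes join one SPDK instance; -T: exercise the temperature-
#     threshold AER path.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0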
00:21:39.943 Getting orig temperature thresholds of all controllers 00:21:39.943 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:21:39.943 Setting all controllers temperature threshold low to trigger AER 00:21:39.943 Waiting for all controllers temperature threshold to be set lower 00:21:39.943 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:21:39.943 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:21:39.943 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:21:39.943 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:21:39.943 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:21:39.943 Waiting for all controllers to trigger AER and reset threshold 00:21:39.943 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:21:39.943 Cleaning up... 00:21:39.943 00:21:39.943 real 0m0.660s 00:21:39.943 user 0m0.237s 00:21:39.943 sys 0m0.314s 00:21:39.943 16:36:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.943 16:36:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:21:39.943 ************************************ 00:21:39.943 END TEST nvme_multi_aen 00:21:39.943 ************************************ 00:21:40.203 16:36:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:21:40.203 16:36:16 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:40.203 16:36:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.203 16:36:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:40.203 ************************************ 00:21:40.203 START TEST nvme_startup 00:21:40.203 ************************************ 00:21:40.203 16:36:16 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:21:40.462 Initializing NVMe Controllers 00:21:40.462 Attached to 0000:00:10.0 00:21:40.462 Attached to 0000:00:11.0 00:21:40.462 Attached to 0000:00:13.0 00:21:40.462 Attached to 0000:00:12.0 00:21:40.462 Initialization complete. 00:21:40.462 Time used:190237.203 (us). 
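The startup run above attached all four controllers in roughly 190 ms against its -t 1000000 budget (microseconds, an inference from the '(us)' unit in the Time used line). The nvme_multi_secondary test that follows the timing summary launches three spdk_nvme_perf processes sharing one SPDK/DPDK instance; a minimal sketch of that pattern (the three command lines are the ones traced below; the backgrounding and wait bookkeeping is an assumption matching the script's pid0/pid1 handling):

perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# The same shm id (-i 0) lets the secondaries attach to the primary's
# hugepage state; disjoint core masks keep each worker on its own lcore.
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # longer-running primary
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # secondary on core 1
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!   # secondary on core 2
wait   # block until all three have printed their latency tables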
00:21:40.462 00:21:40.462 real 0m0.291s 00:21:40.462 user 0m0.097s 00:21:40.462 sys 0m0.146s 00:21:40.462 16:36:16 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:40.462 16:36:16 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:21:40.462 ************************************ 00:21:40.462 END TEST nvme_startup 00:21:40.462 ************************************ 00:21:40.462 16:36:16 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:21:40.462 16:36:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:40.462 16:36:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:40.462 16:36:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:40.462 ************************************ 00:21:40.462 START TEST nvme_multi_secondary 00:21:40.462 ************************************ 00:21:40.462 16:36:16 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:21:40.462 16:36:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65062 00:21:40.462 16:36:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:21:40.462 16:36:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:21:40.462 16:36:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65063 00:21:40.462 16:36:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:21:43.789 Initializing NVMe Controllers 00:21:43.789 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:43.789 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:43.789 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:43.789 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:43.789 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:21:43.789 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:21:43.789 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:21:43.789 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:21:43.789 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:21:43.789 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:21:43.789 Initialization complete. Launching workers. 
00:21:43.789 ======================================================== 00:21:43.789 Latency(us) 00:21:43.789 Device Information : IOPS MiB/s Average min max 00:21:43.789 PCIE (0000:00:10.0) NSID 1 from core 1: 4858.93 18.98 3290.51 1188.78 7556.65 00:21:43.789 PCIE (0000:00:11.0) NSID 1 from core 1: 4858.93 18.98 3292.71 1375.65 8229.31 00:21:43.789 PCIE (0000:00:13.0) NSID 1 from core 1: 4858.93 18.98 3292.94 1296.45 7758.53 00:21:43.789 PCIE (0000:00:12.0) NSID 1 from core 1: 4858.93 18.98 3293.24 1164.65 6534.94 00:21:43.789 PCIE (0000:00:12.0) NSID 2 from core 1: 4858.93 18.98 3293.55 1227.66 7019.23 00:21:43.789 PCIE (0000:00:12.0) NSID 3 from core 1: 4858.93 18.98 3293.82 1382.35 7514.81 00:21:43.789 ======================================================== 00:21:43.789 Total : 29153.57 113.88 3292.80 1164.65 8229.31 00:21:43.789 00:21:44.048 Initializing NVMe Controllers 00:21:44.048 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:44.048 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:44.048 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:44.048 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:44.048 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:21:44.048 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:21:44.048 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:21:44.048 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:21:44.048 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:21:44.048 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:21:44.048 Initialization complete. Launching workers. 00:21:44.048 ======================================================== 00:21:44.048 Latency(us) 00:21:44.048 Device Information : IOPS MiB/s Average min max 00:21:44.048 PCIE (0000:00:10.0) NSID 1 from core 2: 3102.71 12.12 5155.42 1472.83 15123.14 00:21:44.048 PCIE (0000:00:11.0) NSID 1 from core 2: 3102.71 12.12 5156.26 1451.69 12889.37 00:21:44.048 PCIE (0000:00:13.0) NSID 1 from core 2: 3102.71 12.12 5155.77 1220.47 13216.37 00:21:44.048 PCIE (0000:00:12.0) NSID 1 from core 2: 3102.71 12.12 5155.66 1153.29 14470.96 00:21:44.048 PCIE (0000:00:12.0) NSID 2 from core 2: 3102.71 12.12 5156.15 1081.71 15049.25 00:21:44.048 PCIE (0000:00:12.0) NSID 3 from core 2: 3102.71 12.12 5155.97 973.92 15027.11 00:21:44.048 ======================================================== 00:21:44.048 Total : 18616.29 72.72 5155.87 973.92 15123.14 00:21:44.048 00:21:44.048 16:36:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65062 00:21:45.950 Initializing NVMe Controllers 00:21:45.950 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:45.950 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:45.950 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:45.950 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:45.950 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:45.950 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:21:45.950 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:21:45.950 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:21:45.950 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:21:45.950 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:21:45.950 Initialization complete. Launching workers. 
00:21:45.950 ======================================================== 00:21:45.950 Latency(us) 00:21:45.950 Device Information : IOPS MiB/s Average min max 00:21:45.950 PCIE (0000:00:10.0) NSID 1 from core 0: 7733.28 30.21 2067.32 921.23 7078.55 00:21:45.950 PCIE (0000:00:11.0) NSID 1 from core 0: 7733.28 30.21 2068.49 937.88 7085.51 00:21:45.950 PCIE (0000:00:13.0) NSID 1 from core 0: 7733.28 30.21 2068.45 910.10 7279.60 00:21:45.950 PCIE (0000:00:12.0) NSID 1 from core 0: 7733.28 30.21 2068.43 901.60 7015.49 00:21:45.950 PCIE (0000:00:12.0) NSID 2 from core 0: 7733.28 30.21 2068.41 887.96 7075.13 00:21:45.950 PCIE (0000:00:12.0) NSID 3 from core 0: 7733.28 30.21 2068.37 836.74 7418.56 00:21:45.950 ======================================================== 00:21:45.950 Total : 46399.65 181.25 2068.25 836.74 7418.56 00:21:45.950 00:21:45.950 16:36:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65063 00:21:45.950 16:36:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65132 00:21:45.950 16:36:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:21:45.950 16:36:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65133 00:21:45.950 16:36:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:21:45.950 16:36:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:21:49.232 Initializing NVMe Controllers 00:21:49.232 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:49.232 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:49.232 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:49.232 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:49.232 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:49.232 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:21:49.232 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:21:49.232 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:21:49.232 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:21:49.232 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:21:49.232 Initialization complete. Launching workers. 
00:21:49.232 ======================================================== 00:21:49.232 Latency(us) 00:21:49.232 Device Information : IOPS MiB/s Average min max 00:21:49.232 PCIE (0000:00:10.0) NSID 1 from core 0: 5201.41 20.32 3073.75 938.57 7605.00 00:21:49.232 PCIE (0000:00:11.0) NSID 1 from core 0: 5201.41 20.32 3075.72 977.25 8123.03 00:21:49.232 PCIE (0000:00:13.0) NSID 1 from core 0: 5201.41 20.32 3075.93 963.15 7867.00 00:21:49.232 PCIE (0000:00:12.0) NSID 1 from core 0: 5201.41 20.32 3076.59 971.53 7912.68 00:21:49.232 PCIE (0000:00:12.0) NSID 2 from core 0: 5201.41 20.32 3077.02 970.46 8252.63 00:21:49.232 PCIE (0000:00:12.0) NSID 3 from core 0: 5206.74 20.34 3074.03 968.32 7527.86 00:21:49.232 ======================================================== 00:21:49.232 Total : 31213.78 121.93 3075.51 938.57 8252.63 00:21:49.232 00:21:49.232 Initializing NVMe Controllers 00:21:49.232 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:49.232 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:49.232 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:49.232 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:49.232 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:21:49.232 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:21:49.232 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:21:49.232 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:21:49.232 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:21:49.232 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:21:49.232 Initialization complete. Launching workers. 00:21:49.232 ======================================================== 00:21:49.232 Latency(us) 00:21:49.232 Device Information : IOPS MiB/s Average min max 00:21:49.232 PCIE (0000:00:10.0) NSID 1 from core 1: 4863.43 19.00 3287.19 1093.41 8036.67 00:21:49.232 PCIE (0000:00:11.0) NSID 1 from core 1: 4863.43 19.00 3289.24 1131.40 7871.06 00:21:49.232 PCIE (0000:00:13.0) NSID 1 from core 1: 4863.43 19.00 3289.41 1139.42 8231.06 00:21:49.232 PCIE (0000:00:12.0) NSID 1 from core 1: 4863.43 19.00 3289.73 1142.42 7988.18 00:21:49.232 PCIE (0000:00:12.0) NSID 2 from core 1: 4863.43 19.00 3289.88 1119.12 7945.49 00:21:49.232 PCIE (0000:00:12.0) NSID 3 from core 1: 4863.43 19.00 3289.83 1122.91 7492.19 00:21:49.232 ======================================================== 00:21:49.232 Total : 29180.59 113.99 3289.21 1093.41 8231.06 00:21:49.232 00:21:51.140 Initializing NVMe Controllers 00:21:51.140 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:51.140 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:21:51.140 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:21:51.140 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:21:51.140 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:21:51.140 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:21:51.140 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:21:51.140 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:21:51.140 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:21:51.140 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:21:51.140 Initialization complete. Launching workers. 
00:21:51.140 ======================================================== 00:21:51.140 Latency(us) 00:21:51.140 Device Information : IOPS MiB/s Average min max 00:21:51.140 PCIE (0000:00:10.0) NSID 1 from core 2: 3313.51 12.94 4827.13 1107.22 13616.29 00:21:51.140 PCIE (0000:00:11.0) NSID 1 from core 2: 3313.51 12.94 4828.55 1132.59 13915.32 00:21:51.140 PCIE (0000:00:13.0) NSID 1 from core 2: 3313.51 12.94 4827.98 1148.51 13736.86 00:21:51.140 PCIE (0000:00:12.0) NSID 1 from core 2: 3313.51 12.94 4828.13 1199.74 13683.62 00:21:51.140 PCIE (0000:00:12.0) NSID 2 from core 2: 3313.51 12.94 4828.25 1153.31 13648.38 00:21:51.140 PCIE (0000:00:12.0) NSID 3 from core 2: 3313.51 12.94 4828.17 1139.68 12962.75 00:21:51.140 ======================================================== 00:21:51.140 Total : 19881.05 77.66 4828.03 1107.22 13915.32 00:21:51.140 00:21:51.398 ************************************ 00:21:51.399 END TEST nvme_multi_secondary 00:21:51.399 ************************************ 00:21:51.399 16:36:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65132 00:21:51.399 16:36:27 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65133 00:21:51.399 00:21:51.399 real 0m10.861s 00:21:51.399 user 0m18.594s 00:21:51.399 sys 0m1.046s 00:21:51.399 16:36:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.399 16:36:27 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:21:51.399 16:36:27 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:21:51.399 16:36:27 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:21:51.399 16:36:27 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/64065 ]] 00:21:51.399 16:36:27 nvme -- common/autotest_common.sh@1090 -- # kill 64065 00:21:51.399 16:36:27 nvme -- common/autotest_common.sh@1091 -- # wait 64065 00:21:51.399 [2024-10-17 16:36:27.523990] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.524274] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.524326] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.524358] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.526994] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.527071] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.527097] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.527124] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.529677] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 
00:21:51.399 [2024-10-17 16:36:27.529760] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.529788] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.529815] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.532404] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.532482] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.532512] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.399 [2024-10-17 16:36:27.532540] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65005) is not found. Dropping the request. 00:21:51.657 [2024-10-17 16:36:27.760428] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:21:51.657 16:36:27 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:21:51.657 16:36:27 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:21:51.657 16:36:27 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:21:51.657 16:36:27 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:51.657 16:36:27 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.657 16:36:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:51.657 ************************************ 00:21:51.657 START TEST bdev_nvme_reset_stuck_adm_cmd 00:21:51.657 ************************************ 00:21:51.657 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:21:51.657 * Looking for test storage... 
00:21:51.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:21:51.657 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:51.657 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:21:51.657 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:21:51.917 16:36:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.917 --rc genhtml_branch_coverage=1 00:21:51.917 --rc genhtml_function_coverage=1 00:21:51.917 --rc genhtml_legend=1 00:21:51.917 --rc geninfo_all_blocks=1 00:21:51.917 --rc geninfo_unexecuted_blocks=1 00:21:51.917 00:21:51.917 ' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.917 --rc genhtml_branch_coverage=1 00:21:51.917 --rc genhtml_function_coverage=1 00:21:51.917 --rc genhtml_legend=1 00:21:51.917 --rc geninfo_all_blocks=1 00:21:51.917 --rc geninfo_unexecuted_blocks=1 00:21:51.917 00:21:51.917 ' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.917 --rc genhtml_branch_coverage=1 00:21:51.917 --rc genhtml_function_coverage=1 00:21:51.917 --rc genhtml_legend=1 00:21:51.917 --rc geninfo_all_blocks=1 00:21:51.917 --rc geninfo_unexecuted_blocks=1 00:21:51.917 00:21:51.917 ' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.917 --rc genhtml_branch_coverage=1 00:21:51.917 --rc genhtml_function_coverage=1 00:21:51.917 --rc genhtml_legend=1 00:21:51.917 --rc geninfo_all_blocks=1 00:21:51.917 --rc geninfo_unexecuted_blocks=1 00:21:51.917 00:21:51.917 ' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:21:51.917 
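With the knobs above set (a 15 s injected command timeout against a 5 s pass budget), the test's overall flow is worth spelling out; a sketch built from the RPC calls traced later in this log (the commands and arguments are verbatim from the trace, the flow summary and comments are annotation):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
# Arm a one-shot fault: hold the next GET FEATURES (opc 10) admin command
# for up to 15 s and complete it with sct=0 sc=1 (Invalid Opcode).
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
# A GET FEATURES is then issued in the background and gets stuck; resetting
# the controller must complete it manually and recover well inside the 5 s
# test_timeout rather than waiting out the injected 15 s.
$rpc bdev_nvme_reset_controller nvme0
$rpc bdev_nvme_detach_controller nvme0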
16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:21:51.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65296 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65296 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 65296 ']' 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
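A sketch of the start-and-wait step traced above (the binary, core mask, pid handling, and socket path are from the log; polling readiness with rpc_get_methods is an assumption about what waitforlisten does internally):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
spdk_target_pid=$!
# Block until the target is listening on its RPC socket.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done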
00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.917 16:36:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:52.176 [2024-10-17 16:36:28.234158] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:21:52.176 [2024-10-17 16:36:28.234328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65296 ] 00:21:52.176 [2024-10-17 16:36:28.445963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:52.435 [2024-10-17 16:36:28.586910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.435 [2024-10-17 16:36:28.587044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.435 [2024-10-17 16:36:28.587068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.435 [2024-10-17 16:36:28.587072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:53.371 nvme0n1 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_jFG9b.txt 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:53.371 true 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1729182989 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65325 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:21:53.371 16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:53.371 
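The -c payload handed to bdev_nvme_send_cmd above is the raw 64-byte admin submission-queue entry, base64 encoded. Decoding it (a one-liner anyone can run; offsets follow the NVMe SQE layout) shows opcode 0x0a, GET FEATURES, in byte 0 and cdw10 = 7, Number of Queues, at byte offset 40, which is exactly the command the completion lines below report:

base64 -d <<< 'CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==' \
  | hexdump -ve '/1 "0x%02x "'; echo
# byte 0      -> 0x0a  (GET FEATURES)
# bytes 40-43 -> 0x07 0x00 0x00 0x00  (cdw10 = 0x00000007, Number of Queues)
# -r c2h marks the transfer direction as controller-to-host.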
16:36:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:55.907 [2024-10-17 16:36:31.596329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:21:55.907 [2024-10-17 16:36:31.596793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:55.907 [2024-10-17 16:36:31.596830] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:21:55.907 [2024-10-17 16:36:31.596847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:55.907 [2024-10-17 16:36:31.598662] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:55.907 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65325 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65325 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65325 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_jFG9b.txt 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_jFG9b.txt 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65296 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 65296 ']' 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 65296 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65296 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:55.907 killing process with pid 65296 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65296' 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 65296 00:21:55.907 16:36:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 65296 00:21:58.444 16:36:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:21:58.444 16:36:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:21:58.444 00:21:58.444 real 0m6.386s 00:21:58.444 user 0m22.204s 00:21:58.444 sys 0m0.833s 00:21:58.444 16:36:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:21:58.444 ************************************ 00:21:58.444 END TEST bdev_nvme_reset_stuck_adm_cmd 00:21:58.444 16:36:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:21:58.444 ************************************ 00:21:58.444 16:36:34 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:21:58.444 16:36:34 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:21:58.444 16:36:34 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:58.444 16:36:34 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.444 16:36:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:21:58.444 ************************************ 00:21:58.444 START TEST nvme_fio 00:21:58.444 ************************************ 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:21:58.444 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:58.444 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:21:58.703 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:21:58.703 16:36:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:58.703 16:36:34 nvme.nvme_fio -- 
common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:58.703 16:36:34 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:21:59.110 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:59.110 fio-3.35 00:21:59.110 Starting 1 thread 00:22:02.442 00:22:02.442 test: (groupid=0, jobs=1): err= 0: pid=65480: Thu Oct 17 16:36:38 2024 00:22:02.442 read: IOPS=21.8k, BW=85.2MiB/s (89.4MB/s)(171MiB/2001msec) 00:22:02.442 slat (nsec): min=3799, max=67802, avg=4798.56, stdev=1372.43 00:22:02.442 clat (usec): min=223, max=10604, avg=2927.34, stdev=507.30 00:22:02.442 lat (usec): min=228, max=10672, avg=2932.13, stdev=507.96 00:22:02.442 clat percentiles (usec): 00:22:02.442 | 1.00th=[ 1975], 5.00th=[ 2606], 10.00th=[ 2704], 20.00th=[ 2769], 00:22:02.442 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:22:02.442 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3097], 95.00th=[ 3326], 00:22:02.442 | 99.00th=[ 5276], 99.50th=[ 6390], 99.90th=[ 8455], 99.95th=[ 8848], 00:22:02.442 | 99.99th=[10421] 00:22:02.442 bw ( KiB/s): min=84056, max=86944, per=98.25%, avg=85761.00, stdev=1513.11, samples=3 00:22:02.442 iops : min=21014, max=21736, avg=21440.00, stdev=378.15, samples=3 00:22:02.442 write: IOPS=21.7k, BW=84.6MiB/s (88.8MB/s)(169MiB/2001msec); 0 zone resets 00:22:02.442 slat (nsec): min=3998, max=99168, avg=4960.87, stdev=1416.82 00:22:02.442 clat (usec): min=197, max=10522, avg=2931.67, stdev=513.07 00:22:02.442 lat (usec): min=202, max=10535, avg=2936.63, stdev=513.72 00:22:02.442 clat percentiles (usec): 00:22:02.442 | 1.00th=[ 1958], 5.00th=[ 2606], 10.00th=[ 2704], 20.00th=[ 2769], 00:22:02.442 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:22:02.442 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3359], 00:22:02.442 | 99.00th=[ 5342], 99.50th=[ 6521], 99.90th=[ 8455], 99.95th=[ 8979], 00:22:02.442 | 99.99th=[10028] 00:22:02.442 bw ( KiB/s): min=84000, max=87560, per=99.15%, avg=85945.00, stdev=1802.80, samples=3 00:22:02.442 iops : min=21000, max=21890, avg=21486.00, stdev=450.63, samples=3 00:22:02.442 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:22:02.442 lat (msec) : 2=1.07%, 4=96.75%, 10=2.12%, 20=0.01% 00:22:02.442 cpu : usr=99.25%, sys=0.15%, 
ctx=5, majf=0, minf=608 00:22:02.442 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:02.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:02.442 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:02.442 issued rwts: total=43665,43361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:02.443 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:02.443 00:22:02.443 Run status group 0 (all jobs): 00:22:02.443 READ: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:22:02.443 WRITE: bw=84.6MiB/s (88.8MB/s), 84.6MiB/s-84.6MiB/s (88.8MB/s-88.8MB/s), io=169MiB (178MB), run=2001-2001msec 00:22:02.702 ----------------------------------------------------- 00:22:02.702 Suppressions used: 00:22:02.702 count bytes template 00:22:02.702 1 32 /usr/src/fio/parse.c 00:22:02.702 1 8 libtcmalloc_minimal.so 00:22:02.702 ----------------------------------------------------- 00:22:02.702 00:22:02.702 16:36:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:02.702 16:36:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:02.702 16:36:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:22:02.702 16:36:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:02.961 16:36:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:22:02.961 16:36:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:03.221 16:36:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:22:03.221 16:36:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:22:03.221 16:36:39 nvme.nvme_fio -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:03.221 16:36:39 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:22:03.481 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:03.481 fio-3.35 00:22:03.481 Starting 1 thread 00:22:07.668 00:22:07.668 test: (groupid=0, jobs=1): err= 0: pid=65551: Thu Oct 17 16:36:43 2024 00:22:07.668 read: IOPS=21.9k, BW=85.7MiB/s (89.9MB/s)(171MiB/2001msec) 00:22:07.668 slat (nsec): min=3818, max=82838, avg=4840.01, stdev=1246.48 00:22:07.668 clat (usec): min=199, max=11907, avg=2913.53, stdev=377.66 00:22:07.668 lat (usec): min=203, max=11981, avg=2918.37, stdev=378.06 00:22:07.668 clat percentiles (usec): 00:22:07.668 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2802], 00:22:07.669 | 30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:22:07.669 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3392], 00:22:07.669 | 99.00th=[ 4113], 99.50th=[ 4883], 99.90th=[ 7832], 99.95th=[10028], 00:22:07.669 | 99.99th=[11731] 00:22:07.669 bw ( KiB/s): min=83944, max=88448, per=98.49%, avg=86432.67, stdev=2289.00, samples=3 00:22:07.669 iops : min=20986, max=22112, avg=21608.00, stdev=572.20, samples=3 00:22:07.669 write: IOPS=21.8k, BW=85.1MiB/s (89.3MB/s)(170MiB/2001msec); 0 zone resets 00:22:07.669 slat (nsec): min=3884, max=42353, avg=4990.32, stdev=1243.82 00:22:07.669 clat (usec): min=234, max=11742, avg=2914.09, stdev=381.32 00:22:07.669 lat (usec): min=238, max=11756, avg=2919.08, stdev=381.68 00:22:07.669 clat percentiles (usec): 00:22:07.669 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2802], 00:22:07.669 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:22:07.669 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3392], 00:22:07.669 | 99.00th=[ 4113], 99.50th=[ 4817], 99.90th=[ 8160], 99.95th=[10159], 00:22:07.669 | 99.99th=[11469] 00:22:07.669 bw ( KiB/s): min=83896, max=89304, per=99.40%, avg=86640.67, stdev=2704.92, samples=3 00:22:07.669 iops : min=20974, max=22326, avg=21660.00, stdev=676.22, samples=3 00:22:07.669 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:22:07.669 lat (msec) : 2=0.61%, 4=98.21%, 10=1.10%, 20=0.05% 00:22:07.669 cpu : usr=99.30%, sys=0.10%, ctx=4, majf=0, minf=608 00:22:07.669 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:07.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:07.669 issued rwts: total=43899,43602,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.669 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:07.669 00:22:07.669 Run status group 0 (all jobs): 00:22:07.669 READ: bw=85.7MiB/s (89.9MB/s), 85.7MiB/s-85.7MiB/s (89.9MB/s-89.9MB/s), io=171MiB (180MB), run=2001-2001msec 00:22:07.669 WRITE: bw=85.1MiB/s (89.3MB/s), 85.1MiB/s-85.1MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:22:07.669 ----------------------------------------------------- 00:22:07.669 Suppressions used: 00:22:07.669 count bytes template 00:22:07.669 1 32 /usr/src/fio/parse.c 00:22:07.669 1 8 libtcmalloc_minimal.so 00:22:07.669 ----------------------------------------------------- 00:22:07.669 
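The runs above all go through the fio_plugin helper traced at common/autotest_common.sh@1337-1352: it resolves which ASAN runtime the SPDK ioengine was linked against and preloads it into fio together with the plugin itself. A minimal sketch of that pattern, assuming the paths shown in this run (the actual helper lives in test/common/autotest_common.sh and does more bookkeeping than this):

    fio_plugin() {
        local plugin=$1; shift
        local fio_dir=/usr/src/fio
        local sanitizers=('libasan' 'libclang_rt.asan')
        local asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # fio dlopens the plugin at runtime, so the sanitizer runtime the
            # plugin links against must already be loaded into the fio process.
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }

Invoked as in the trace, e.g. fio_plugin build/fio/spdk_nvme app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096; the job file selects ioengine=spdk, which resolves to the preloaded plugin.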
00:22:07.669 16:36:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:07.669 16:36:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:07.669 16:36:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:07.669 16:36:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:07.669 16:36:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:22:07.669 16:36:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:07.928 16:36:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:22:07.928 16:36:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:07.928 16:36:44 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:22:08.187 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:08.187 fio-3.35 00:22:08.187 Starting 1 thread 00:22:13.480 00:22:13.480 test: (groupid=0, jobs=1): err= 0: pid=65613: Thu Oct 17 16:36:49 2024 00:22:13.480 read: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(146MiB/2001msec) 00:22:13.480 slat (usec): min=3, max=398, avg= 5.77, stdev= 3.11 00:22:13.480 clat (usec): min=199, max=17535, avg=3366.12, stdev=987.88 00:22:13.480 lat (usec): min=204, max=17540, avg=3371.89, stdev=988.95 00:22:13.480 clat percentiles (usec): 00:22:13.480 | 1.00th=[ 2278], 5.00th=[ 2835], 10.00th=[ 2933], 20.00th=[ 2999], 00:22:13.480 | 30.00th=[ 3064], 
40.00th=[ 3097], 50.00th=[ 3163], 60.00th=[ 3195], 00:22:13.480 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3720], 95.00th=[ 4883], 00:22:13.480 | 99.00th=[ 8455], 99.50th=[ 8848], 99.90th=[11076], 99.95th=[16057], 00:22:13.480 | 99.99th=[17433] 00:22:13.480 bw ( KiB/s): min=69096, max=79304, per=99.97%, avg=74861.33, stdev=5230.96, samples=3 00:22:13.480 iops : min=17274, max=19826, avg=18715.33, stdev=1307.74, samples=3 00:22:13.480 write: IOPS=18.7k, BW=73.1MiB/s (76.7MB/s)(146MiB/2001msec); 0 zone resets 00:22:13.480 slat (nsec): min=3937, max=68968, avg=6194.55, stdev=2398.51 00:22:13.480 clat (usec): min=261, max=24347, avg=3437.57, stdev=1359.96 00:22:13.480 lat (usec): min=267, max=24353, avg=3443.77, stdev=1360.75 00:22:13.480 clat percentiles (usec): 00:22:13.480 | 1.00th=[ 2474], 5.00th=[ 2868], 10.00th=[ 2933], 20.00th=[ 3032], 00:22:13.480 | 30.00th=[ 3064], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:22:13.480 | 70.00th=[ 3261], 80.00th=[ 3359], 90.00th=[ 3785], 95.00th=[ 5211], 00:22:13.480 | 99.00th=[ 8717], 99.50th=[10683], 99.90th=[22938], 99.95th=[23462], 00:22:13.480 | 99.99th=[24249] 00:22:13.480 bw ( KiB/s): min=69040, max=79448, per=99.99%, avg=74882.67, stdev=5320.27, samples=3 00:22:13.480 iops : min=17260, max=19862, avg=18720.67, stdev=1330.07, samples=3 00:22:13.480 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:22:13.480 lat (msec) : 2=0.46%, 4=91.35%, 10=7.78%, 20=0.29%, 50=0.07% 00:22:13.480 cpu : usr=99.00%, sys=0.20%, ctx=5, majf=0, minf=608 00:22:13.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:13.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:13.480 issued rwts: total=37459,37464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:13.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:13.480 00:22:13.480 Run status group 0 (all jobs): 00:22:13.480 READ: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=146MiB (153MB), run=2001-2001msec 00:22:13.480 WRITE: bw=73.1MiB/s (76.7MB/s), 73.1MiB/s-73.1MiB/s (76.7MB/s-76.7MB/s), io=146MiB (153MB), run=2001-2001msec 00:22:13.480 ----------------------------------------------------- 00:22:13.480 Suppressions used: 00:22:13.480 count bytes template 00:22:13.480 1 32 /usr/src/fio/parse.c 00:22:13.480 1 8 libtcmalloc_minimal.so 00:22:13.480 ----------------------------------------------------- 00:22:13.480 00:22:13.480 16:36:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:13.480 16:36:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:22:13.480 16:36:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:22:13.480 16:36:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:22:13.739 16:36:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:22:13.739 16:36:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:22:13.998 16:36:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:22:13.998 16:36:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:13.998 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:13.998 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:13.998 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:13.998 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:13.999 16:36:50 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:22:13.999 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:13.999 fio-3.35 00:22:13.999 Starting 1 thread 00:22:20.566 00:22:20.566 test: (groupid=0, jobs=1): err= 0: pid=65680: Thu Oct 17 16:36:56 2024 00:22:20.566 read: IOPS=21.5k, BW=84.0MiB/s (88.1MB/s)(168MiB/2001msec) 00:22:20.566 slat (usec): min=3, max=107, avg= 4.88, stdev= 1.64 00:22:20.566 clat (usec): min=203, max=11246, avg=2972.16, stdev=623.24 00:22:20.566 lat (usec): min=208, max=11321, avg=2977.03, stdev=624.09 00:22:20.566 clat percentiles (usec): 00:22:20.566 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:22:20.566 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:22:20.566 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3130], 95.00th=[ 3654], 00:22:20.566 | 99.00th=[ 6587], 99.50th=[ 7177], 99.90th=[ 8291], 99.95th=[ 8979], 00:22:20.566 | 99.99th=[10945] 00:22:20.566 bw ( KiB/s): min=85784, max=89392, per=100.00%, avg=87133.33, stdev=1968.40, samples=3 00:22:20.566 iops : min=21446, max=22348, avg=21783.33, stdev=492.10, samples=3 00:22:20.566 write: IOPS=21.4k, BW=83.4MiB/s (87.5MB/s)(167MiB/2001msec); 0 zone resets 00:22:20.566 slat (usec): min=3, max=386, avg= 5.06, stdev= 2.60 00:22:20.566 clat (usec): min=356, max=11070, avg=2972.08, stdev=608.81 00:22:20.566 lat (usec): min=362, max=11084, avg=2977.14, stdev=609.69 00:22:20.566 clat percentiles (usec): 00:22:20.566 | 1.00th=[ 2245], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:22:20.566 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:22:20.566 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 
3097], 95.00th=[ 3621], 00:22:20.566 | 99.00th=[ 6521], 99.50th=[ 7177], 99.90th=[ 8291], 99.95th=[ 9241], 00:22:20.566 | 99.99th=[10683] 00:22:20.566 bw ( KiB/s): min=85544, max=89320, per=100.00%, avg=87274.67, stdev=1907.57, samples=3 00:22:20.566 iops : min=21386, max=22330, avg=21818.67, stdev=476.89, samples=3 00:22:20.566 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:22:20.566 lat (msec) : 2=0.37%, 4=96.02%, 10=3.55%, 20=0.03% 00:22:20.566 cpu : usr=98.95%, sys=0.25%, ctx=19, majf=0, minf=606 00:22:20.566 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:20.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.566 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:20.566 issued rwts: total=43047,42726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.566 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:20.566 00:22:20.566 Run status group 0 (all jobs): 00:22:20.566 READ: bw=84.0MiB/s (88.1MB/s), 84.0MiB/s-84.0MiB/s (88.1MB/s-88.1MB/s), io=168MiB (176MB), run=2001-2001msec 00:22:20.566 WRITE: bw=83.4MiB/s (87.5MB/s), 83.4MiB/s-83.4MiB/s (87.5MB/s-87.5MB/s), io=167MiB (175MB), run=2001-2001msec 00:22:20.566 ----------------------------------------------------- 00:22:20.566 Suppressions used: 00:22:20.566 count bytes template 00:22:20.566 1 32 /usr/src/fio/parse.c 00:22:20.566 1 8 libtcmalloc_minimal.so 00:22:20.566 ----------------------------------------------------- 00:22:20.566 00:22:20.566 ************************************ 00:22:20.566 END TEST nvme_fio 00:22:20.566 ************************************ 00:22:20.566 16:36:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:22:20.566 16:36:56 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:22:20.566 00:22:20.566 real 0m22.337s 00:22:20.566 user 0m15.313s 00:22:20.566 sys 0m10.640s 00:22:20.566 16:36:56 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.566 16:36:56 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:22:20.566 ************************************ 00:22:20.566 END TEST nvme 00:22:20.566 ************************************ 00:22:20.566 00:22:20.566 real 1m37.807s 00:22:20.566 user 3m43.958s 00:22:20.566 sys 0m30.238s 00:22:20.566 16:36:56 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.566 16:36:56 nvme -- common/autotest_common.sh@10 -- # set +x 00:22:20.566 16:36:56 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:22:20.566 16:36:56 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:22:20.566 16:36:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:20.566 16:36:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.566 16:36:56 -- common/autotest_common.sh@10 -- # set +x 00:22:20.566 ************************************ 00:22:20.566 START TEST nvme_scc 00:22:20.566 ************************************ 00:22:20.566 16:36:56 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:22:20.566 * Looking for test storage... 
00:22:20.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:20.566 16:36:56 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:20.567 16:36:56 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:22:20.567 16:36:56 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:20.826 16:36:56 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@345 -- # : 1 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:22:20.826 16:36:56 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@368 -- # return 0 00:22:20.827 16:36:56 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.827 16:36:56 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:20.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.827 --rc genhtml_branch_coverage=1 00:22:20.827 --rc genhtml_function_coverage=1 00:22:20.827 --rc genhtml_legend=1 00:22:20.827 --rc geninfo_all_blocks=1 00:22:20.827 --rc geninfo_unexecuted_blocks=1 00:22:20.827 00:22:20.827 ' 00:22:20.827 16:36:56 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:20.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.827 --rc genhtml_branch_coverage=1 00:22:20.827 --rc genhtml_function_coverage=1 00:22:20.827 --rc genhtml_legend=1 00:22:20.827 --rc geninfo_all_blocks=1 00:22:20.827 --rc geninfo_unexecuted_blocks=1 00:22:20.827 00:22:20.827 ' 00:22:20.827 16:36:56 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:22:20.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.827 --rc genhtml_branch_coverage=1 00:22:20.827 --rc genhtml_function_coverage=1 00:22:20.827 --rc genhtml_legend=1 00:22:20.827 --rc geninfo_all_blocks=1 00:22:20.827 --rc geninfo_unexecuted_blocks=1 00:22:20.827 00:22:20.827 ' 00:22:20.827 16:36:56 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:20.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.827 --rc genhtml_branch_coverage=1 00:22:20.827 --rc genhtml_function_coverage=1 00:22:20.827 --rc genhtml_legend=1 00:22:20.827 --rc geninfo_all_blocks=1 00:22:20.827 --rc geninfo_unexecuted_blocks=1 00:22:20.827 00:22:20.827 ' 00:22:20.827 16:36:56 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.827 16:36:56 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.827 16:36:56 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.827 16:36:56 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.827 16:36:56 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.827 16:36:56 nvme_scc -- paths/export.sh@5 -- # export PATH 00:22:20.827 16:36:56 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
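The controller scan that follows (nvme/functions.sh@45 onward) walks /sys/class/nvme, maps each controller to its PCI address, and feeds nvme id-ctrl output through nvme_get, which evals every "register : value" pair into a bash associative array (nvme0[vid], nvme0[ssvid], ..., and later nvme0n1[nsze] per namespace). A compressed sketch of that parse loop, assuming the nvme-cli path from the trace and plain assignment in place of the helper's eval:

    declare -A nvme0=()
    while IFS=: read -r reg val; do
        # id-ctrl prints "vid       : 0x1b36"; strip the key padding, keep
        # only lines that actually carry a value, and drop the single space
        # after the colon (trailing padding in values like mn is preserved).
        reg=${reg// /}
        [[ -n $reg && -n $val ]] && nvme0[$reg]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

Once populated, each register is queryable as e.g. ${nvme0[oncs]} without re-running nvme-cli, which is what the rest of the scan below relies on.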
00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:22:20.827 16:36:56 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:22:20.827 16:36:56 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.827 16:36:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:22:20.827 16:36:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:22:20.827 16:36:56 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:22:20.827 16:36:56 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:21.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:21.664 Waiting for block devices as requested 00:22:21.664 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:21.664 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:21.664 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:21.932 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:27.237 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:27.237 16:37:03 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:22:27.237 16:37:03 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:22:27.237 16:37:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:27.237 16:37:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:22:27.237 16:37:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:22:27.237 16:37:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:22:27.237 16:37:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:22:27.237 16:37:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:27.237 16:37:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:27.238 16:37:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.238 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:22:27.239 16:37:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:22:27.239 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:27.240 16:37:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.240 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
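The nvme0n1 geometry captured here is easy to sanity-check: nsze, ncap and nuse are all 0x140000 blocks, flbas=0x4 (captured above) selects LBA format 4, and the lbaf4 record a few lines further down reports lbads:12, i.e. 2^12 = 4096-byte blocks. That makes the namespace 0x140000 * 4096 = 5368709120 bytes, exactly 5 GiB. The same arithmetic works directly on the captured fields, assuming the nvme0n1 array populated above is in scope:

    echo $(( 0x140000 * (1 << 12) ))        # 5368709120 bytes = 5 GiB
    echo $(( nvme0n1[nsze] * (1 << 12) ))   # same, via the captured field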
00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.241 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:27.242 16:37:03 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:22:27.242 16:37:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:22:27.242 16:37:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:27.242 16:37:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:27.242 16:37:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:22:27.242 16:37:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.242 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:22:27.243 
16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
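The oacs value just captured is a capability bitmask: 0x12a has bits 1, 3, 5 and 8 set, which the NVMe base specification defines as Format NVM, Namespace Management, Directives, and Doorbell Buffer Config support (the bit assignments come from the spec, not from this log). With the array populated, a test can gate on a capability with plain shell arithmetic:

    # OACS bit 3 = Namespace Management (per the NVMe base spec)
    if (( nvme1[oacs] & (1 << 3) )); then
        echo "nvme1 supports namespace management"
    fi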
00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:22:27.243 16:37:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.243 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:22:27.244 16:37:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:22:27.244 16:37:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
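nvme1 carries subnqn nqn.2019-08.org.qemu:12340 and serial 12340, distinguishing it from nvme0, whose subnqn ends in 12341, even though both are QEMU controllers. As the earlier functions.sh@58-63 records show, each scanned controller is registered in the global maps ctrls, nvmes (controller -> name of its namespace array), bdfs (controller -> PCI address) and ordered_ctrls. A consumer could walk them with a nameref; this is a sketch on the assumption that those maps are associative arrays already in scope, as the trace suggests:

    walk_ctrls() {                              # hypothetical helper
        local ctrl
        for ctrl in "${!ctrls[@]}"; do          # nvme0, nvme1, ...
            local -n _ns_map=${nvmes[$ctrl]}    # e.g. points at nvme0_ns
            echo "$ctrl @ ${bdfs[$ctrl]}: namespaces: ${_ns_map[*]}"
            unset -n _ns_map                    # re-point cleanly next pass
        done
    }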
00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.244 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.245 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:27.246 
16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:22:27.246 16:37:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:22:27.246 16:37:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:22:27.246 16:37:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:27.246 16:37:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:22:27.246 16:37:03 nvme_scc -- 
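nvme1 is now fully registered (ctrls/nvmes/bdfs/ordered_ctrls at @60-@63) and the loop advances to nvme2 behind the pci_can_use gate. The surrounding enumeration at functions.sh@47-@63 reconstructs roughly as below; how the PCI address at @49 is obtained is an assumption here, the rest mirrors the trace:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed; e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                    # allow/block lists in scripts/common.sh
        ctrl_dev=${ctrl##*/}                              # @51: nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # @52
        # ... the namespace scan runs here; see the sketch further down ...
        ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61: name of the per-ctrl ns map
        bdfs["$ctrl_dev"]=$pci                            # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63: indexed by controller number
    done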
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:27.246 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:22:27.247 16:37:03 nvme_scc 
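The ver value just captured (0x10400) packs the supported NVMe spec version into major/minor/tertiary bytes; decoding it confirms this QEMU controller reports NVMe 1.4.0:

    v=${nvme2[ver]}                  # 0x10400 from the record above
    echo "NVMe $((v >> 16)).$(((v >> 8) & 0xff)).$((v & 0xff))"    # -> NVMe 1.4.0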
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:22:27.247 16:37:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
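wctemp=343 and cctemp=373 above are kelvins, which is how the identify data reports temperature thresholds; converted, the QEMU defaults are unsurprising:

    echo "warning:  $(( ${nvme2[wctemp]} - 273 )) C"    # 343 K -> 70 C
    echo "critical: $(( ${nvme2[cctemp]} - 273 )) C"    # 373 K -> 100 C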
00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.247 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:22:27.248 16:37:03 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
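sqes=0x66 and cqes=0x44 pack two log2 entry sizes per byte, required size in the low nibble and maximum in the high nibble; a quick decode of the values above:

    sqes=${nvme2[sqes]} cqes=${nvme2[cqes]}
    printf 'SQE %d..%d bytes, CQE %d..%d bytes\n' \
        "$((1 << (sqes & 0xf)))" "$((1 << (sqes >> 4)))" \
        "$((1 << (cqes & 0xf)))" "$((1 << (cqes >> 4)))"
    # -> SQE 64..64 bytes, CQE 16..16 bytes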
00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:27.248 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:22:27.249 
16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
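With the nvme2 controller map filled, the trace turns to its namespaces (functions.sh@53-@58). Inline in the real script, that pass amounts to roughly the following; the wrapper name scan_namespaces is hypothetical:

    scan_namespaces() {                        # hypothetical name; inline in functions.sh
        local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev
        local -n _ctrl_ns=${ctrl_dev}_ns       # nameref to e.g. nvme2_ns, as at @53
        for ns in "$ctrl/${ctrl##*/}n"*; do    # /sys/class/nvme/nvme2/nvme2n1, ... (@54)
            [[ -e $ns ]] || continue           # @55
            ns_dev=${ns##*/}                   # @56: nvme2n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # @57
            _ctrl_ns[${ns##*n}]=$ns_dev        # @58: keyed by namespace number
        done
    }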
0x100000 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.249 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.521 16:37:03 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.521 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:27.522 16:37:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:22:27.522 16:37:03 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.522 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:22:27.523 16:37:03 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:22:27.523 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
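The register-by-register assignments above all come from the nvme_get helper in nvme/functions.sh; the xtrace exposes its moving parts at functions.sh@16-23: run an nvme-cli query, split each output line on ':', and eval the key/value pair into a global associative array named after the device. A minimal reconstruction from the trace follows; the tr -s squeeze and the exact key/value trimming are assumptions inferred from the stored values (e.g. 'lbaf  0' becoming lbaf0), not visible in the log itself.

    nvme_get() {                                # e.g. nvme_get nvme2n2 id-ns /dev/nvme2n2
        local ref=$1 reg val                    # functions.sh@17
        shift                                   # functions.sh@18
        local -gA "$ref=()"                     # functions.sh@20: global assoc array
        while IFS=: read -r reg val; do         # functions.sh@21: split on the first ':'
            [[ -n $val ]] || continue           # functions.sh@22: skip non key:value lines
            # functions.sh@23: keys lose embedded spaces ("lbaf  0" -> lbaf0);
            # values keep everything after the separator, minus one leading space
            eval "${ref}[${reg// /}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@" | tr -s ' ')   # functions.sh@16
    }

Namespace passes invoke it as nvme_get nvme2n2 id-ns /dev/nvme2n2 (functions.sh@57), which is why every field in this stretch of the trace lands in the nvme2n2 array.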
00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 
16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 
16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.524 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:22:27.525 16:37:03 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:27.525 
16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:27.525 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:22:27.526 16:37:03 nvme_scc -- scripts/common.sh@18 -- # local i 00:22:27.526 16:37:03 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:22:27.526 16:37:03 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:27.526 16:37:03 nvme_scc -- scripts/common.sh@27 -- # return 0 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@18 -- # shift 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:22:27.526 16:37:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
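The id-ctrl pass for nvme3 above was entered through the controller discovery loop that the trace shows at functions.sh@47-63 (with the inner namespace loop at @54-58 closing out nvme2 just before it). A sketch reconstructed from those trace lines; how the PCI address at @49 is derived is not visible in the log, so the sysfs readlink below is an assumption, and the declaration of the per-controller _ctrl_ns map is omitted here:

    for ctrl in /sys/class/nvme/nvme*; do                 # functions.sh@47
        [[ -e $ctrl ]] || continue                        # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # @49: assumed; trace only shows the BDF
        pci_can_use "$pci" || continue                    # @50: allow/block-list check (scripts/common.sh)
        ctrl_dev=${ctrl##*/}                              # functions.sh@51
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # functions.sh@52
        for ns in "$ctrl/${ctrl##*/}n"*; do               # functions.sh@54
            [[ -e $ns ]] || continue                      # functions.sh@55
            ns_dev=${ns##*/}                              # functions.sh@56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # functions.sh@57
            _ctrl_ns[${ns##*n}]=$ns_dev                   # @58: namespace id -> array name
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                      # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61: name of the per-ctrl ns map
        bdfs["$ctrl_dev"]=$pci                            # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63: numeric ordering key
    done

pci_can_use itself (scripts/common.sh@18-27 in the trace) appears to match the candidate BDF against allow/block lists; both are empty in this run, hence the bare [[ =~ 0000:00:13.0 ]] test followed by return 0.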
00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:22:27.526 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
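A few of the controller fields just captured decode directly: vid 0x1b36 and ssvid 0x1af4 are Red Hat/QEMU PCI identifiers, consistent with the 'QEMU NVMe Ctrl' model string, and ver packs the NVMe version as major 31:16, minor 15:8, tertiary 7:0. A throwaway check, not part of functions.sh:

    ver=$((0x10400))                   # value stored in nvme3[ver] above
    printf 'NVMe %d.%d.%d\n' \
        $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))   # -> NVMe 1.4.0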
00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 
16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.527 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
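
One field recorded a few lines up matters for what follows: oncs=0x15d is the Optional NVM Command Support bitmask, and bit 8 of it is the Copy command. That single bit is what ctrl_has_scc tests further down with (( oncs & 1 << 8 )) when picking a controller for the SCC test. Worked through with the values from this log (bit-8 = Copy per the NVMe base spec):

    oncs=0x15d                      # 0b1_0101_1101
    if (( oncs & 1 << 8 )); then    # 1 << 8 = 0x100, the Copy bit
        echo "controller supports simple copy"
    fi
    # 0x15d & 0x100 = 0x100 (non-zero), so every controller in this run qualifies.
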
00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:22:27.528 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:27.529 16:37:03 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:22:27.529 16:37:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:22:27.529 
16:37:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:22:27.529 16:37:03 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:22:27.529 16:37:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:22:27.529 16:37:03 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:22:27.529 16:37:03 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:28.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:29.034 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:29.034 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:29.034 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:29.034 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:22:29.293 16:37:05 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:22:29.293 16:37:05 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:29.293 16:37:05 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:29.293 16:37:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:22:29.293 ************************************ 00:22:29.293 START TEST nvme_simple_copy 00:22:29.293 ************************************ 00:22:29.293 16:37:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:22:29.551 Initializing NVMe Controllers 00:22:29.551 Attaching to 0000:00:10.0 00:22:29.551 Controller supports SCC. Attached to 0000:00:10.0 00:22:29.551 Namespace ID: 1 size: 6GB 00:22:29.551 Initialization complete. 00:22:29.551 00:22:29.551 Controller QEMU NVMe Ctrl (12340 ) 00:22:29.551 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:22:29.551 Namespace Block Size:4096 00:22:29.551 Writing LBAs 0 to 63 with Random Data 00:22:29.551 Copied LBAs from 0 - 63 to the Destination LBA 256 00:22:29.551 LBAs matching Written Data: 64 00:22:29.551 00:22:29.551 real 0m0.317s 00:22:29.551 user 0m0.112s 00:22:29.551 sys 0m0.103s 00:22:29.551 ************************************ 00:22:29.551 END TEST nvme_simple_copy 00:22:29.551 ************************************ 00:22:29.551 16:37:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:29.551 16:37:05 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:22:29.551 ************************************ 00:22:29.551 END TEST nvme_scc 00:22:29.551 ************************************ 00:22:29.551 00:22:29.551 real 0m9.080s 00:22:29.551 user 0m1.642s 00:22:29.551 sys 0m2.376s 00:22:29.551 16:37:05 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:29.551 16:37:05 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:22:29.811 16:37:05 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:22:29.811 16:37:05 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:22:29.811 16:37:05 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:22:29.811 16:37:05 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:22:29.811 16:37:05 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:22:29.811 16:37:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:29.811 16:37:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:29.811 16:37:05 -- common/autotest_common.sh@10 -- # set +x 00:22:29.811 ************************************ 00:22:29.811 START TEST nvme_fdp 00:22:29.811 ************************************ 00:22:29.811 16:37:05 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:22:29.811 * Looking for test storage... 
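
Both the simple-copy test that just finished and the nvme_fdp suite now starting go through run_test, autotest's timing wrapper: it prints the START TEST / END TEST banners, and the real/user/sys triple above comes from timing the payload. A condensed sketch of the wrapper's shape; the real helper in autotest_common.sh also validates its argument count and toggles xtrace (the '[' 4 -le 1 ']' and xtrace_disable lines above), and its banners are fancier:

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?                # captures the payload's status, not time's
        echo "************ END TEST $name ************"
        return $rc
    }

    # e.g.: run_test nvme_fdp test/nvme/nvme_fdp.sh
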
00:22:29.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:29.811 16:37:05 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:29.811 16:37:06 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:22:29.811 16:37:06 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:29.811 16:37:06 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:22:29.811 16:37:06 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.071 16:37:06 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.071 16:37:06 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.071 16:37:06 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:22:30.071 16:37:06 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.071 16:37:06 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:30.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.071 --rc genhtml_branch_coverage=1 00:22:30.071 --rc genhtml_function_coverage=1 00:22:30.071 --rc genhtml_legend=1 00:22:30.071 --rc geninfo_all_blocks=1 00:22:30.071 --rc geninfo_unexecuted_blocks=1 00:22:30.071 00:22:30.071 ' 00:22:30.071 16:37:06 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:30.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.071 --rc genhtml_branch_coverage=1 00:22:30.071 --rc genhtml_function_coverage=1 00:22:30.071 --rc genhtml_legend=1 00:22:30.071 --rc geninfo_all_blocks=1 00:22:30.071 --rc geninfo_unexecuted_blocks=1 00:22:30.071 00:22:30.071 ' 00:22:30.071 16:37:06 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:22:30.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.071 --rc genhtml_branch_coverage=1 00:22:30.071 --rc genhtml_function_coverage=1 00:22:30.071 --rc genhtml_legend=1 00:22:30.071 --rc geninfo_all_blocks=1 00:22:30.071 --rc geninfo_unexecuted_blocks=1 00:22:30.071 00:22:30.071 ' 00:22:30.071 16:37:06 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:30.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.071 --rc genhtml_branch_coverage=1 00:22:30.071 --rc genhtml_function_coverage=1 00:22:30.071 --rc genhtml_legend=1 00:22:30.071 --rc geninfo_all_blocks=1 00:22:30.071 --rc geninfo_unexecuted_blocks=1 00:22:30.071 00:22:30.071 ' 00:22:30.071 16:37:06 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.071 16:37:06 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.071 16:37:06 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.071 16:37:06 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.071 16:37:06 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.071 16:37:06 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.071 16:37:06 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.071 16:37:06 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.071 16:37:06 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:22:30.071 16:37:06 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
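
Before turning on coverage flags, scripts/common.sh decides whether the installed lcov predates 2.0: lcov --version is trimmed to its last field with awk, both version strings are split on '.', '-' and ':' (IFS=.-:), and the component arrays are compared left to right with the loop bound padding the shorter one. Here 1.15 sorts before 2 at the first slot, so the pre-2.0 LCOV_OPTS are exported. The comparison, condensed (ver_lt is my name for the lt/cmp_versions pair traced above):

    ver_lt() {   # return 0 iff $1 sorts before $2
        local -a a b; local i n
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2, use legacy --rc names"
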
00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:22:30.071 16:37:06 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:22:30.071 16:37:06 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:30.071 16:37:06 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:30.640 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:30.898 Waiting for block devices as requested 00:22:30.898 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:30.898 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:31.155 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:31.156 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:36.435 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:36.435 16:37:12 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:22:36.435 16:37:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:22:36.435 16:37:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:36.435 16:37:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:36.435 16:37:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
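
The FDP suite rebuilds the controller inventory from scratch: functions.sh declares fresh global maps keyed by controller name, setup.sh reset rebinds the four QEMU devices from uio_pci_generic back to the kernel nvme driver, and the sysfs walk that just opened /dev/nvme0 fills one entry per controller, exactly as the earlier nvme_scc scan did for nvme3. The bookkeeping, reduced to its shape with values taken from this log:

    declare -A ctrls=() nvmes=() bdfs=()    # functions.sh@10..12
    declare -a ordered_ctrls=()
    # once nvme_get has cached a controller's id-ctrl fields:
    ctrls[nvme3]=nvme3              # ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes[nvme3]=nvme3_ns           # name of that controller's namespace array
    bdfs[nvme3]=0000:00:13.0        # PCI address found during the sysfs walk
    ordered_ctrls[3]=nvme3          # index = ${ctrl_dev/nvme/}
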
00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.435 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:22:36.436 16:37:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:22:36.436 16:37:12 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
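
oacs=0x12a, re-read here for nvme0 just as it was for nvme3 earlier, is the Optional Admin Command Support word. Decoding it (the bit names are mine, taken from the NVMe base spec rather than from anything this log prints):

    oacs=0x12a                                                 # 0b1_0010_1010
    (( oacs & 1 << 1 )) && echo "Format NVM"                   # bit 1
    (( oacs & 1 << 3 )) && echo "Namespace Management"         # bit 3
    (( oacs & 1 << 5 )) && echo "Directives"                   # bit 5
    (( oacs & 1 << 8 )) && echo "Doorbell Buffer Config"       # bit 8
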
00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.436 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:22:36.437 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:22:36.437 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:22:36.437 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:22:36.438 16:37:12 nvme_fdp -- 
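
[editor's note] The sqes=0x66 and cqes=0x44 values captured just above pack two log2 sizes into one byte: per the NVMe identify-controller layout, bits 3:0 are the required (minimum) queue-entry size and bits 7:4 the maximum, each as a power of two. A quick decode:

    sqes=0x66 cqes=0x44
    printf 'SQE: min %d, max %d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** ((sqes >> 4) & 0xf)))
    printf 'CQE: min %d, max %d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** ((cqes >> 4) & 0xf)))
    # -> 64/64-byte submission entries, 16/16-byte completion entries
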
nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:36.438 
16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:22:36.438 16:37:12 nvme_fdp -- 
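
[editor's note] The odd-looking rwt key recorded above, with value "0 rwl:0 idle_power:- active_power:-", falls straight out of the colon-splitting: read with IFS=: cuts at the first colon only, and the last variable keeps the remainder, colons and all. Reproducible in isolation:

    IFS=: read -r reg val <<< 'rwt:0 rwl:0 idle_power:- active_power:-'
    echo "$reg"   # rwt
    echo "$val"   # 0 rwl:0 idle_power:- active_power:-

The multi-word ps0 power-state string keeps its internal colons and spaces intact the same way.
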
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.438 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:22:36.439 16:37:12 nvme_fdp -- 
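
[editor's note] For the lbaf0-lbaf7 table that follows, the active format is selected by the flbas field captured earlier (0x4 here): in the NVMe identify-namespace layout its low nibble is the LBA-format index, which is why lbaf4 is the entry tagged "(in use)" below.

    flbas=0x4
    echo "in use: lbaf$((flbas & 0xf))"   # lbaf4 -> ms:0 lbads:12 rp:0 (in use)
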
nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:36.439 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:22:36.440 16:37:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:22:36.440 16:37:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:36.440 16:37:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:36.440 16:37:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # 
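
[editor's note] With nsze=0x140000 blocks and the in-use lbaf4 giving lbads:12 (2^12 = 4096-byte blocks), the nvme0n1 namespace just parsed works out to 5 GiB:

    nsze=0x140000 lbads=12
    echo $((nsze * (1 << lbads)))   # 5368709120 bytes = 5 GiB
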
IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- 
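
[editor's note] The transition just above (functions.sh@47-63 and scripts/common.sh@18-27) is the outer discovery loop picking up the second controller at 0000:00:10.0 and starting a fresh id-ctrl parse into nvme1. A rough, hedged reconstruction of its shape -- helper internals, such as how the BDF is derived, are inferred here and not taken from the SPDK source:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed sysfs-derived BDF
        pci_can_use "$pci" || continue                   # filters via PCI allow/block lists
        ctrl_dev=${ctrl##*/}                             # e.g. nvme1
        nvme_get "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
    done
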
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 
16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:22:36.440 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- 
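
[editor's note] As with nvme0, the thermal thresholds land in kelvins (wctemp=343, cctemp=373); subtracting 273 gives the familiar ~70 C warning and ~100 C critical composite-temperature limits:

    for k in 343 373; do echo "$k K ~= $((k - 273)) C"; done
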
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 
16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:22:36.441 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:22:36.442 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.442 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:22:36.443 16:37:12 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:22:36.443 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:22:36.444 16:37:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:22:36.444 16:37:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:22:36.444 16:37:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:36.444 16:37:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:22:36.444 
16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:22:36.444 16:37:12 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.444 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:22:36.445 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
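The `IFS=:` / `read -r reg val` / `eval` triplets that dominate this trace are the body of the `nvme_get` helper in nvme/functions.sh: it pipes `nvme id-ctrl` (or `id-ns`) output through a read loop and stores each "field : value" pair in a global associative array named after the device. A minimal sketch of that pattern, assuming nvme-cli's human-readable output format and eliding the helper's edge-case handling:

    nvme_get_sketch() {          # usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2
        local ref=$1 reg val
        shift
        local -gA "$ref=()"      # global associative array, e.g. nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}              # "sqes     " -> "sqes"
            [[ -n $reg && -n $val ]] || continue  # skip banner and blank lines
            eval "${ref}[\$reg]=\${val# }"        # e.g. nvme2[sqes]=0x66
        done < <(nvme "$@")
    }

Run against /dev/nvme2 this reproduces the assignments logged here; note that `read` hands everything after the first colon to `val`, which is why values like nqn.2019-08.org.qemu:12342 keep their embedded colons.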
00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:36.446 16:37:12 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
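Once a controller is fully parsed, functions.sh files it into per-device bookkeeping (the @53-@63 lines earlier in this log, where nvme1 was finished): a `local -n` nameref points `_ctrl_ns` at the controller-specific `<ctrl>_ns` array, and the `ctrls`, `nvmes`, `bdfs`, and `ordered_ctrls` maps tie each device name to its namespace array and PCI address. A sketch of that nameref idiom, reusing the array names from the trace:

    declare -A ctrls nvmes bdfs   # global maps, as in the trace
    declare -a ordered_ctrls

    register_ctrl_sketch() {      # usage: register_ctrl_sketch nvme2 0000:00:12.0 nvme2n1
        local ctrl_dev=$1 bdf=$2 ns_dev=$3
        declare -ga "${ctrl_dev}_ns"          # one array per controller, e.g. nvme2_ns
        local -n _ctrl_ns=${ctrl_dev}_ns      # nameref: the write below lands in nvme2_ns
        _ctrl_ns[${ns_dev##*n}]=$ns_dev       # keyed by namespace id: nvme2n1 -> [1]
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns
        bdfs[$ctrl_dev]=$bdf
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    }

The indirection is needed because bash has no nested arrays: `nvmes[nvme2]` holds the *name* nvme2_ns, and a consumer re-attaches to that array with another nameref when it needs the namespace list.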
00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:22:36.446 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.447 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:22:36.709 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:22:36.710 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.710 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.711 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:22:36.712 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.712 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:22:36.713 
16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.713 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:22:36.714 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:22:36.714 16:37:12 nvme_fdp -- scripts/common.sh@18 -- # local i 00:22:36.714 16:37:12 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:22:36.714 16:37:12 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:36.714 16:37:12 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:22:36.714 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:22:36.714 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 
16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.715 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 
16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:22:36.716 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
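Note on the trace above: this repetitive block is nvme_get caching 'nvme id-ctrl /dev/nvme3' output into a bash associative array, one register per iteration: each nvme-cli line is split with 'IFS=: read -r reg val' and stored via eval, and the loop resumes below with the eval for subnqn. A minimal stand-alone sketch of the pattern, assuming one 'reg : val' pair per output line as the trace shows (the real helper in nvme/functions.sh additionally wires per-namespace arrays together with namerefs):

    # Hedged sketch of the nvme_get parsing loop traced above.
    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip lines with no value field
        reg=${reg//[[:space:]]/}         # trim whitespace from the register name
        nvme3[$reg]=$val                 # remaining colons stay in val, so NQNs survive intact
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "${nvme3[subnqn]}"              # nqn.2019-08.org.qemu:fdp-subsys3 on this run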
00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:22:36.717 16:37:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
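Note on the controller scan above and below: each ctrl_has_fdp call reduces to one arithmetic test. get_nvme_ctrl_feature echoes the cached CTRATT register through a nameref, and bit 19 of CTRATT advertises Flexible Data Placement, which is why only nvme3, whose CTRATT reads 0x88010, gets echoed as the scan continues, while the 0x8000 controllers fall through. A sketch of the check, using values from this trace:

    # CTRATT bit 19 (1 << 19 = 0x80000) is the FDP capability bit tested above;
    # << binds tighter than & in bash arithmetic, matching the traced expression.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))           # exit status 0 only when the FDP bit is set
    }
    ctrl_has_fdp 0x88010 && echo nvme3   # 0x88010 has bit 19 set
    ctrl_has_fdp 0x8000 || echo 'no FDP' # 0x8000 does not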
00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:22:36.717 16:37:12 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:22:36.717 16:37:12 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:22:36.717 16:37:12 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:37.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:38.220 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:38.220 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:22:38.220 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:38.220 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:22:38.478 16:37:14 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:22:38.478 16:37:14 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:38.479 16:37:14 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:38.479 16:37:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:22:38.479 ************************************ 00:22:38.479 START TEST nvme_flexible_data_placement 00:22:38.479 ************************************ 00:22:38.479 16:37:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:22:38.737 Initializing NVMe Controllers 00:22:38.737 Attaching to 0000:00:13.0 00:22:38.737 Controller supports FDP Attached to 0000:00:13.0 00:22:38.737 Namespace ID: 1 Endurance Group ID: 1 00:22:38.737 Initialization complete. 00:22:38.737 00:22:38.737 ================================== 00:22:38.737 == FDP tests for Namespace: #01 == 00:22:38.737 ================================== 00:22:38.737 00:22:38.737 Get Feature: FDP: 00:22:38.737 ================= 00:22:38.737 Enabled: Yes 00:22:38.737 FDP configuration Index: 0 00:22:38.737 00:22:38.737 FDP configurations log page 00:22:38.737 =========================== 00:22:38.737 Number of FDP configurations: 1 00:22:38.737 Version: 0 00:22:38.737 Size: 112 00:22:38.737 FDP Configuration Descriptor: 0 00:22:38.737 Descriptor Size: 96 00:22:38.737 Reclaim Group Identifier format: 2 00:22:38.737 FDP Volatile Write Cache: Not Present 00:22:38.737 FDP Configuration: Valid 00:22:38.737 Vendor Specific Size: 0 00:22:38.737 Number of Reclaim Groups: 2 00:22:38.737 Number of Reclaim Unit Handles: 8 00:22:38.737 Max Placement Identifiers: 128 00:22:38.737 Number of Namespaces Supported: 256 00:22:38.737 Reclaim Unit Nominal Size: 6000000 bytes 00:22:38.737 Estimated Reclaim Unit Time Limit: Not Reported 00:22:38.737 RUH Desc #000: RUH Type: Initially Isolated 00:22:38.737 RUH Desc #001: RUH Type: Initially Isolated 00:22:38.737 RUH Desc #002: RUH Type: Initially Isolated 00:22:38.737 RUH Desc #003: RUH Type: Initially Isolated 00:22:38.737 RUH Desc #004: RUH Type: Initially Isolated 00:22:38.737 RUH Desc #005: RUH Type: Initially Isolated 00:22:38.737 RUH Desc #006: RUH Type: Initially Isolated 00:22:38.737 RUH Desc #007: RUH Type: Initially Isolated 00:22:38.737 00:22:38.737 FDP reclaim unit handle usage log page 00:22:38.737 ====================================== 00:22:38.737 Number of Reclaim Unit Handles: 8 00:22:38.737 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:22:38.737 RUH Usage Desc #001: RUH Attributes: Unused 00:22:38.737 RUH Usage Desc #002: RUH Attributes: Unused 00:22:38.737 RUH Usage Desc #003: RUH Attributes: Unused 00:22:38.737 RUH Usage Desc #004: RUH Attributes: Unused 00:22:38.737 RUH Usage Desc #005: RUH Attributes: Unused 00:22:38.737 RUH Usage Desc #006: RUH Attributes: Unused 00:22:38.737 RUH Usage Desc #007: RUH Attributes: Unused 00:22:38.737 00:22:38.737 FDP statistics log page 00:22:38.737 ======================= 00:22:38.737 Host bytes with metadata written: 984354816 00:22:38.737 Media bytes with metadata written: 984489984 00:22:38.737 Media bytes erased: 0 00:22:38.737 00:22:38.737 FDP Reclaim unit handle status 00:22:38.737 ============================== 00:22:38.737 Number of RUHS descriptors: 2 00:22:38.737 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000153f 00:22:38.737 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:22:38.737 00:22:38.737 FDP write on placement id: 0 success 00:22:38.737 00:22:38.737 Set Feature: Enabling FDP events on Placement handle: #0 
Success 00:22:38.737 00:22:38.737 IO mgmt send: RUH update for Placement ID: #0 Success 00:22:38.737 00:22:38.737 Get Feature: FDP Events for Placement handle: #0 00:22:38.737 ======================== 00:22:38.738 Number of FDP Events: 6 00:22:38.738 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:22:38.738 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:22:38.738 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:22:38.738 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:22:38.738 FDP Event: #4 Type: Media Reallocated Enabled: No 00:22:38.738 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:22:38.738 00:22:38.738 FDP events log page 00:22:38.738 =================== 00:22:38.738 Number of FDP events: 1 00:22:38.738 FDP Event #0: 00:22:38.738 Event Type: RU Not Written to Capacity 00:22:38.738 Placement Identifier: Valid 00:22:38.738 NSID: Valid 00:22:38.738 Location: Valid 00:22:38.738 Placement Identifier: 0 00:22:38.738 Event Timestamp: 8 00:22:38.738 Namespace Identifier: 1 00:22:38.738 Reclaim Group Identifier: 0 00:22:38.738 Reclaim Unit Handle Identifier: 0 00:22:38.738 00:22:38.738 FDP test passed 00:22:38.738 00:22:38.738 real 0m0.300s 00:22:38.738 user 0m0.102s 00:22:38.738 sys 0m0.097s 16:37:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:38.738 ************************************ 16:37:14 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:22:38.738 END TEST nvme_flexible_data_placement ************************************ 00:22:38.738 00:22:38.738 real 0m9.119s 00:22:38.738 user 0m1.591s 00:22:38.738 sys 0m2.594s 16:37:14 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 16:37:14 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:22:38.738 ************************************ 00:22:38.738 END TEST nvme_fdp 00:22:38.738 ************************************ 00:22:38.996 16:37:15 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:22:38.996 16:37:15 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:22:38.996 16:37:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:38.996 16:37:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:38.996 16:37:15 -- common/autotest_common.sh@10 -- # set +x 00:22:38.996 ************************************ 00:22:38.996 START TEST nvme_rpc 00:22:38.996 ************************************ 00:22:38.996 16:37:15 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:22:38.996 * Looking for test storage... 
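Note on the START TEST / END TEST banners and the real/user/sys triplets above: they come from the run_test wrapper in common/autotest_common.sh. A rough sketch of the pattern, reconstructed from the banners alone rather than from the actual helper:

    # Hedged sketch of the run_test wrapper whose output frames each test above.
    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' "START TEST $name" '************************************'
        time "$@"                        # emits the real/user/sys lines seen in the log
        local rc=$?
        printf '%s\n' '************************************' "END TEST $name" '************************************'
        return $rc
    }
    # Invocation matching this trace:
    # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh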
00:22:38.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:38.996 16:37:15 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:38.996 16:37:15 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:22:38.996 16:37:15 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:38.996 16:37:15 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:22:38.996 16:37:15 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.255 16:37:15 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.255 --rc genhtml_branch_coverage=1 00:22:39.255 --rc genhtml_function_coverage=1 00:22:39.255 --rc genhtml_legend=1 00:22:39.255 --rc geninfo_all_blocks=1 00:22:39.255 --rc geninfo_unexecuted_blocks=1 00:22:39.255 00:22:39.255 ' 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.255 --rc genhtml_branch_coverage=1 00:22:39.255 --rc genhtml_function_coverage=1 00:22:39.255 --rc genhtml_legend=1 00:22:39.255 --rc geninfo_all_blocks=1 00:22:39.255 --rc geninfo_unexecuted_blocks=1 00:22:39.255 00:22:39.255 ' 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:22:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.255 --rc genhtml_branch_coverage=1 00:22:39.255 --rc genhtml_function_coverage=1 00:22:39.255 --rc genhtml_legend=1 00:22:39.255 --rc geninfo_all_blocks=1 00:22:39.255 --rc geninfo_unexecuted_blocks=1 00:22:39.255 00:22:39.255 ' 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:39.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.255 --rc genhtml_branch_coverage=1 00:22:39.255 --rc genhtml_function_coverage=1 00:22:39.255 --rc genhtml_legend=1 00:22:39.255 --rc geninfo_all_blocks=1 00:22:39.255 --rc geninfo_unexecuted_blocks=1 00:22:39.255 00:22:39.255 ' 00:22:39.255 16:37:15 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:39.255 16:37:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:22:39.255 16:37:15 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:22:39.255 16:37:15 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67068 00:22:39.255 16:37:15 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:22:39.255 16:37:15 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:22:39.255 16:37:15 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67068 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 67068 ']' 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:39.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:39.255 16:37:15 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:39.514 [2024-10-17 16:37:15.553287] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
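Note on the get_first_nvme_bdf trace above (autotest_common.sh@1496-1510): it discovers controllers by rendering gen_nvme.sh's JSON config and pulling each PCI address out with jq; the first address becomes the target for bdev_nvme_attach_controller. The core of it, sketched with this run's paths:

    # Hedged sketch of the bdf discovery traced above.
    rootdir=/home/vagrant/spdk_repo/spdk
    # Collect every controller's traddr from the generated JSON config.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1      # this run found 4 devices
    echo "${bdfs[0]}"                    # 0000:00:10.0 on this host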
00:22:39.514 [2024-10-17 16:37:15.553421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67068 ] 00:22:39.514 [2024-10-17 16:37:15.728924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:39.773 [2024-10-17 16:37:15.857655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.773 [2024-10-17 16:37:15.857686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.708 16:37:16 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:40.708 16:37:16 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:22:40.708 16:37:16 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:40.967 Nvme0n1 00:22:40.967 16:37:17 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:22:40.967 16:37:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:22:41.224 request: 00:22:41.224 { 00:22:41.224 "bdev_name": "Nvme0n1", 00:22:41.224 "filename": "non_existing_file", 00:22:41.224 "method": "bdev_nvme_apply_firmware", 00:22:41.224 "req_id": 1 00:22:41.224 } 00:22:41.224 Got JSON-RPC error response 00:22:41.224 response: 00:22:41.224 { 00:22:41.224 "code": -32603, 00:22:41.224 "message": "open file failed." 00:22:41.224 } 00:22:41.224 16:37:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:22:41.224 16:37:17 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:22:41.224 16:37:17 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:41.224 16:37:17 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:22:41.224 16:37:17 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67068 00:22:41.224 16:37:17 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 67068 ']' 00:22:41.224 16:37:17 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 67068 00:22:41.224 16:37:17 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:22:41.224 16:37:17 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:41.224 16:37:17 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67068 00:22:41.482 16:37:17 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:41.482 16:37:17 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:41.482 killing process with pid 67068 00:22:41.482 16:37:17 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67068' 00:22:41.482 16:37:17 nvme_rpc -- common/autotest_common.sh@969 -- # kill 67068 00:22:41.482 16:37:17 nvme_rpc -- common/autotest_common.sh@974 -- # wait 67068 00:22:44.014 00:22:44.014 real 0m5.018s 00:22:44.014 user 0m9.162s 00:22:44.014 sys 0m0.822s 00:22:44.014 16:37:20 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.014 16:37:20 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:44.014 ************************************ 00:22:44.014 END TEST nvme_rpc 00:22:44.014 ************************************ 00:22:44.014 16:37:20 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:22:44.014 16:37:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:22:44.014 16:37:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:44.014 16:37:20 -- common/autotest_common.sh@10 -- # set +x 00:22:44.014 ************************************ 00:22:44.014 START TEST nvme_rpc_timeouts 00:22:44.014 ************************************ 00:22:44.014 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:22:44.014 * Looking for test storage... 00:22:44.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:44.014 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:44.014 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:22:44.014 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.273 16:37:20 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:44.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.273 --rc genhtml_branch_coverage=1 00:22:44.273 --rc genhtml_function_coverage=1 00:22:44.273 --rc genhtml_legend=1 00:22:44.273 --rc geninfo_all_blocks=1 00:22:44.273 --rc geninfo_unexecuted_blocks=1 00:22:44.273 00:22:44.273 ' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:44.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.273 --rc genhtml_branch_coverage=1 00:22:44.273 --rc genhtml_function_coverage=1 00:22:44.273 --rc genhtml_legend=1 00:22:44.273 --rc geninfo_all_blocks=1 00:22:44.273 --rc geninfo_unexecuted_blocks=1 00:22:44.273 00:22:44.273 ' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:44.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.273 --rc genhtml_branch_coverage=1 00:22:44.273 --rc genhtml_function_coverage=1 00:22:44.273 --rc genhtml_legend=1 00:22:44.273 --rc geninfo_all_blocks=1 00:22:44.273 --rc geninfo_unexecuted_blocks=1 00:22:44.273 00:22:44.273 ' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:44.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.273 --rc genhtml_branch_coverage=1 00:22:44.273 --rc genhtml_function_coverage=1 00:22:44.273 --rc genhtml_legend=1 00:22:44.273 --rc geninfo_all_blocks=1 00:22:44.273 --rc geninfo_unexecuted_blocks=1 00:22:44.273 00:22:44.273 ' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:44.273 16:37:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67150 00:22:44.273 16:37:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67150 00:22:44.273 16:37:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67182 00:22:44.273 16:37:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:22:44.273 16:37:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:22:44.273 16:37:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67182 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 67182 ']' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:44.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:44.273 16:37:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:22:44.273 [2024-10-17 16:37:20.499363] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:22:44.273 [2024-10-17 16:37:20.499486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67182 ] 00:22:44.533 [2024-10-17 16:37:20.673633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:44.533 [2024-10-17 16:37:20.802250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.533 [2024-10-17 16:37:20.802250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.468 16:37:21 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.469 16:37:21 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:22:45.469 Checking default timeout settings: 00:22:45.469 16:37:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:22:45.469 16:37:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:46.035 Making settings changes with rpc: 00:22:46.036 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:22:46.036 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:22:46.036 Check default vs. modified settings: 00:22:46.036 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:22:46.294 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67150 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67150 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:22:46.554 Setting action_on_timeout is changed as expected. 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67150 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67150 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:22:46.554 Setting timeout_us is changed as expected. 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
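The per-setting check traced above reduces to the following bash sketch, reconstructed from the xtrace output. The passing path matches the trace exactly; the failure branch is an assumption, since only successful comparisons appear in this log:

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # Pull the setting's value out of each saved config, keeping only
        # alphanumerics so "none"/"abort" and raw numbers compare cleanly.
        setting_before=$(grep "$setting" /tmp/settings_default_67150 \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" /tmp/settings_modified_67150 \
            | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "Setting $setting was not changed" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done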
00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67150 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67150 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:22:46.554 Setting timeout_admin_us is changed as expected. 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67150 /tmp/settings_modified_67150 00:22:46.554 16:37:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67182 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 67182 ']' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 67182 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67182 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67182' 00:22:46.554 killing process with pid 67182 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 67182 00:22:46.554 16:37:22 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 67182 00:22:49.084 RPC TIMEOUT SETTING TEST PASSED. 00:22:49.084 16:37:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
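The killprocess helper traced here follows this rough shape, inferred from the visible checks in common/autotest_common.sh (uname, ps comm lookup, the sudo comparison). The body of the sudo branch and any retry logic are not shown in the log, so this is a hedged sketch rather than the exact implementation:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 1   # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            # Resolve the process name; SPDK reactors show up as reactor_0 etc.
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" != sudo ]; then      # sudo wrappers need extra handling
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }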
00:22:49.084 00:22:49.084 real 0m5.185s 00:22:49.084 user 0m9.847s 00:22:49.084 sys 0m0.813s 00:22:49.084 16:37:25 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:49.084 16:37:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:22:49.084 ************************************ 00:22:49.084 END TEST nvme_rpc_timeouts 00:22:49.084 ************************************ 00:22:49.342 16:37:25 -- spdk/autotest.sh@239 -- # uname -s 00:22:49.342 16:37:25 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:22:49.342 16:37:25 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:22:49.342 16:37:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:49.342 16:37:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:49.342 16:37:25 -- common/autotest_common.sh@10 -- # set +x 00:22:49.342 ************************************ 00:22:49.342 START TEST sw_hotplug 00:22:49.342 ************************************ 00:22:49.342 16:37:25 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:22:49.342 * Looking for test storage... 00:22:49.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:22:49.342 16:37:25 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:49.342 16:37:25 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:22:49.342 16:37:25 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:49.342 16:37:25 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.342 16:37:25 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.601 16:37:25 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:22:49.601 16:37:25 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.601 16:37:25 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.601 --rc genhtml_branch_coverage=1 00:22:49.601 --rc genhtml_function_coverage=1 00:22:49.601 --rc genhtml_legend=1 00:22:49.601 --rc geninfo_all_blocks=1 00:22:49.601 --rc geninfo_unexecuted_blocks=1 00:22:49.601 00:22:49.601 ' 00:22:49.601 16:37:25 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.601 --rc genhtml_branch_coverage=1 00:22:49.601 --rc genhtml_function_coverage=1 00:22:49.601 --rc genhtml_legend=1 00:22:49.601 --rc geninfo_all_blocks=1 00:22:49.601 --rc geninfo_unexecuted_blocks=1 00:22:49.601 00:22:49.601 ' 00:22:49.601 16:37:25 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.601 --rc genhtml_branch_coverage=1 00:22:49.601 --rc genhtml_function_coverage=1 00:22:49.601 --rc genhtml_legend=1 00:22:49.601 --rc geninfo_all_blocks=1 00:22:49.601 --rc geninfo_unexecuted_blocks=1 00:22:49.601 00:22:49.601 ' 00:22:49.601 16:37:25 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:49.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.601 --rc genhtml_branch_coverage=1 00:22:49.601 --rc genhtml_function_coverage=1 00:22:49.601 --rc genhtml_legend=1 00:22:49.601 --rc geninfo_all_blocks=1 00:22:49.601 --rc geninfo_unexecuted_blocks=1 00:22:49.601 00:22:49.601 ' 00:22:49.601 16:37:25 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:50.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:50.170 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:50.170 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:50.170 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:50.170 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:50.429 16:37:26 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:22:50.429 16:37:26 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:22:50.429 16:37:26 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:22:50.429 16:37:26 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@233 -- # local class 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:22:50.429 16:37:26 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@18 -- # local i 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:22:50.429 16:37:26 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:22:50.429 16:37:26 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:22:50.429 16:37:26 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:22:50.429 16:37:26 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:50.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:51.255 Waiting for block devices as requested 00:22:51.255 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:51.255 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:51.515 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:51.515 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:56.833 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:56.833 16:37:32 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:22:56.833 16:37:32 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:57.401 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:22:57.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:57.401 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:22:57.660 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:22:58.228 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:58.228 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:22:58.228 16:37:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68074 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:22:58.228 16:37:34 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:22:58.228 16:37:34 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:22:58.228 16:37:34 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:22:58.228 16:37:34 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:22:58.228 16:37:34 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:22:58.228 16:37:34 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:22:58.488 Initializing NVMe Controllers 00:22:58.488 Attaching to 0000:00:10.0 00:22:58.488 Attaching to 0000:00:11.0 00:22:58.488 Attached to 0000:00:11.0 00:22:58.488 Attached to 0000:00:10.0 00:22:58.488 Initialization complete. Starting I/O... 
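The debug_remove_attach_helper / timing_cmd pair visible above times the hotplug run with bash's TIMEFORMAT. A minimal sketch, assuming the helper captures the elapsed seconds the way the later "helper_time=43.17" report suggests (the exec/tty handling in the real helper is omitted):

    timing_cmd() {
        local cmd_es=0 time=0 TIMEFORMAT=%2R  # %2R = wall-clock seconds, 2 decimals
        # Run the command, letting its own output through on stderr while the
        # 'time' builtin's report (just the seconds) is captured on stdout.
        time=$( { time "$@" >&2; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 false)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2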
00:22:58.488 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:22:58.488 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:22:58.488 00:22:59.424 QEMU NVMe Ctrl (12341 ): 1366 I/Os completed (+1366) 00:22:59.424 QEMU NVMe Ctrl (12340 ): 1363 I/Os completed (+1363) 00:22:59.424 00:23:00.804 QEMU NVMe Ctrl (12341 ): 3014 I/Os completed (+1648) 00:23:00.805 QEMU NVMe Ctrl (12340 ): 3011 I/Os completed (+1648) 00:23:00.805 00:23:01.740 QEMU NVMe Ctrl (12341 ): 4751 I/Os completed (+1737) 00:23:01.740 QEMU NVMe Ctrl (12340 ): 4744 I/Os completed (+1733) 00:23:01.740 00:23:02.674 QEMU NVMe Ctrl (12341 ): 6550 I/Os completed (+1799) 00:23:02.674 QEMU NVMe Ctrl (12340 ): 6648 I/Os completed (+1904) 00:23:02.674 00:23:03.609 QEMU NVMe Ctrl (12341 ): 8359 I/Os completed (+1809) 00:23:03.609 QEMU NVMe Ctrl (12340 ): 8473 I/Os completed (+1825) 00:23:03.609 00:23:04.545 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:04.545 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:04.545 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:04.545 [2024-10-17 16:37:40.484665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:04.545 Controller removed: QEMU NVMe Ctrl (12340 ) 00:23:04.545 [2024-10-17 16:37:40.487299] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 [2024-10-17 16:37:40.487428] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 [2024-10-17 16:37:40.487498] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 [2024-10-17 16:37:40.487562] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:23:04.545 [2024-10-17 16:37:40.490571] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 [2024-10-17 16:37:40.490743] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 [2024-10-17 16:37:40.490818] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 [2024-10-17 16:37:40.490880] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.545 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:04.545 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:04.545 [2024-10-17 16:37:40.524221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:23:04.545 Controller removed: QEMU NVMe Ctrl (12341 ) 00:23:04.545 [2024-10-17 16:37:40.526070] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 [2024-10-17 16:37:40.526196] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 [2024-10-17 16:37:40.526275] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 [2024-10-17 16:37:40.526337] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:23:04.546 [2024-10-17 16:37:40.529315] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 [2024-10-17 16:37:40.529433] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 [2024-10-17 16:37:40.529461] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 [2024-10-17 16:37:40.529482] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:04.546 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:04.546 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:04.546 Attaching to 0000:00:10.0 00:23:04.546 Attached to 0000:00:10.0 00:23:04.805 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:04.805 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:04.805 16:37:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:04.805 Attaching to 0000:00:11.0 00:23:04.805 Attached to 0000:00:11.0 00:23:05.742 QEMU NVMe Ctrl (12340 ): 1764 I/Os completed (+1764) 00:23:05.742 QEMU NVMe Ctrl (12341 ): 1556 I/Os completed (+1556) 00:23:05.742 00:23:06.679 QEMU NVMe Ctrl (12340 ): 3772 I/Os completed (+2008) 00:23:06.679 QEMU NVMe Ctrl (12341 ): 3569 I/Os completed (+2013) 00:23:06.679 00:23:07.616 QEMU NVMe Ctrl (12340 ): 5760 I/Os completed (+1988) 00:23:07.616 QEMU NVMe Ctrl (12341 ): 5559 I/Os completed (+1990) 00:23:07.616 00:23:08.551 QEMU NVMe Ctrl (12340 ): 7751 I/Os completed (+1991) 00:23:08.551 QEMU NVMe Ctrl (12341 ): 7547 I/Os completed (+1988) 00:23:08.551 00:23:09.488 QEMU NVMe Ctrl (12340 ): 9881 I/Os completed (+2130) 00:23:09.488 QEMU NVMe Ctrl (12341 ): 9671 I/Os completed (+2124) 00:23:09.488 00:23:10.424 QEMU NVMe Ctrl (12340 ): 12037 I/Os completed (+2156) 00:23:10.424 QEMU NVMe Ctrl (12341 ): 11827 I/Os completed (+2156) 00:23:10.424 00:23:11.802 QEMU NVMe Ctrl (12340 ): 14177 I/Os completed (+2140) 00:23:11.802 QEMU NVMe Ctrl (12341 ): 13967 I/Os completed (+2140) 00:23:11.802 00:23:12.738 QEMU NVMe Ctrl (12340 ): 16201 I/Os completed (+2024) 00:23:12.738 QEMU NVMe Ctrl (12341 ): 15991 I/Os completed (+2024) 00:23:12.738 
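Each hotplug event traced above is driven entirely through sysfs. The values echoed in the trace (1, uio_pci_generic, the BDF, '') map onto the standard PCI interfaces; the exact target paths below are an assumption, since the log records only the echo arguments, not the redirections:

    dev=0000:00:10.0   # repeated for 0000:00:11.0
    # Surprise-remove the controller out from under the running hotplug app;
    # the app then logs "in failed state" / "Controller removed" as seen above.
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
    # Bring the device back and pin it to the userspace driver.
    echo 1               > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev"          > /sys/bus/pci/drivers_probe
    echo ''              > "/sys/bus/pci/devices/$dev/driver_override"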
00:23:13.676 QEMU NVMe Ctrl (12340 ): 18129 I/Os completed (+1928) 00:23:13.676 QEMU NVMe Ctrl (12341 ): 17920 I/Os completed (+1929) 00:23:13.676 00:23:14.611 QEMU NVMe Ctrl (12340 ): 20037 I/Os completed (+1908) 00:23:14.611 QEMU NVMe Ctrl (12341 ): 19829 I/Os completed (+1909) 00:23:14.611 00:23:15.547 QEMU NVMe Ctrl (12340 ): 22021 I/Os completed (+1984) 00:23:15.547 QEMU NVMe Ctrl (12341 ): 21815 I/Os completed (+1986) 00:23:15.547 00:23:16.484 QEMU NVMe Ctrl (12340 ): 24117 I/Os completed (+2096) 00:23:16.484 QEMU NVMe Ctrl (12341 ): 23913 I/Os completed (+2098) 00:23:16.484 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:16.744 [2024-10-17 16:37:52.881692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:16.744 Controller removed: QEMU NVMe Ctrl (12340 ) 00:23:16.744 [2024-10-17 16:37:52.883530] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.883589] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.883612] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.883635] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:23:16.744 [2024-10-17 16:37:52.886693] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.886756] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.886775] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.886795] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 EAL: Cannot open sysfs resource 00:23:16.744 EAL: pci_scan_one(): cannot parse resource 00:23:16.744 EAL: Scan for (pci) bus failed. 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:16.744 [2024-10-17 16:37:52.923931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:23:16.744 Controller removed: QEMU NVMe Ctrl (12341 ) 00:23:16.744 [2024-10-17 16:37:52.925674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.925738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.925765] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.925785] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:23:16.744 [2024-10-17 16:37:52.928544] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.928588] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.928610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 [2024-10-17 16:37:52.928631] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:23:16.744 16:37:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:16.744 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:23:16.744 EAL: Scan for (pci) bus failed. 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:17.003 Attaching to 0000:00:10.0 00:23:17.003 Attached to 0000:00:10.0 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:17.003 16:37:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:17.003 Attaching to 0000:00:11.0 00:23:17.003 Attached to 0000:00:11.0 00:23:17.571 QEMU NVMe Ctrl (12340 ): 1016 I/Os completed (+1016) 00:23:17.571 QEMU NVMe Ctrl (12341 ): 796 I/Os completed (+796) 00:23:17.571 00:23:18.507 QEMU NVMe Ctrl (12340 ): 3040 I/Os completed (+2024) 00:23:18.507 QEMU NVMe Ctrl (12341 ): 2820 I/Os completed (+2024) 00:23:18.507 00:23:19.444 QEMU NVMe Ctrl (12340 ): 5040 I/Os completed (+2000) 00:23:19.444 QEMU NVMe Ctrl (12341 ): 4822 I/Os completed (+2002) 00:23:19.444 00:23:20.825 QEMU NVMe Ctrl (12340 ): 7040 I/Os completed (+2000) 00:23:20.825 QEMU NVMe Ctrl (12341 ): 6822 I/Os completed (+2000) 00:23:20.825 00:23:21.400 QEMU NVMe Ctrl (12340 ): 8916 I/Os completed (+1876) 00:23:21.400 QEMU NVMe Ctrl (12341 ): 8701 I/Os completed (+1879) 00:23:21.400 00:23:22.774 QEMU NVMe Ctrl (12340 ): 10667 I/Os completed (+1751) 00:23:22.774 QEMU NVMe Ctrl (12341 ): 10455 I/Os completed (+1754) 00:23:22.774 00:23:23.713 QEMU NVMe Ctrl (12340 ): 12515 I/Os completed (+1848) 00:23:23.713 QEMU NVMe Ctrl (12341 ): 12306 I/Os completed (+1851) 00:23:23.713 
00:23:24.648 QEMU NVMe Ctrl (12340 ): 14370 I/Os completed (+1855) 00:23:24.648 QEMU NVMe Ctrl (12341 ): 14161 I/Os completed (+1855) 00:23:24.648 00:23:25.584 QEMU NVMe Ctrl (12340 ): 16218 I/Os completed (+1848) 00:23:25.584 QEMU NVMe Ctrl (12341 ): 16009 I/Os completed (+1848) 00:23:25.584 00:23:26.518 QEMU NVMe Ctrl (12340 ): 18085 I/Os completed (+1867) 00:23:26.518 QEMU NVMe Ctrl (12341 ): 17881 I/Os completed (+1872) 00:23:26.518 00:23:27.453 QEMU NVMe Ctrl (12340 ): 20005 I/Os completed (+1920) 00:23:27.453 QEMU NVMe Ctrl (12341 ): 19802 I/Os completed (+1921) 00:23:27.453 00:23:28.421 QEMU NVMe Ctrl (12340 ): 21897 I/Os completed (+1892) 00:23:28.421 QEMU NVMe Ctrl (12341 ): 21696 I/Os completed (+1894) 00:23:28.421 00:23:28.989 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:23:28.989 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:28.989 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:28.989 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:29.247 [2024-10-17 16:38:05.283987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:29.247 Controller removed: QEMU NVMe Ctrl (12340 ) 00:23:29.247 [2024-10-17 16:38:05.285748] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.285802] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.285825] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.285848] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:23:29.247 [2024-10-17 16:38:05.288720] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.288766] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.288784] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.288803] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:29.247 [2024-10-17 16:38:05.324176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:23:29.247 Controller removed: QEMU NVMe Ctrl (12341 ) 00:23:29.247 [2024-10-17 16:38:05.325749] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.325796] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.325819] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.325838] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:23:29.247 [2024-10-17 16:38:05.328370] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.328409] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.328432] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 [2024-10-17 16:38:05.328449] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:29.247 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:29.506 Attaching to 0000:00:10.0 00:23:29.506 Attached to 0000:00:10.0 00:23:29.506 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:29.506 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:29.506 16:38:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:23:29.506 Attaching to 0000:00:11.0 00:23:29.506 Attached to 0000:00:11.0 00:23:29.506 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:23:29.506 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:23:29.506 [2024-10-17 16:38:05.659202] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:23:41.719 16:38:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:23:41.719 16:38:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:41.719 16:38:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.17 00:23:41.719 16:38:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.17 00:23:41.719 16:38:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:23:41.719 16:38:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.17 00:23:41.719 16:38:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.17 2 00:23:41.719 remove_attach_helper took 43.17s to complete (handling 2 nvme drive(s)) 16:38:17 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68074 00:23:48.295 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68074) - No such process 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68074 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68623 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:23:48.295 16:38:23 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68623 00:23:48.295 16:38:23 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 68623 ']' 00:23:48.295 16:38:23 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.295 16:38:23 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.295 16:38:23 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.295 16:38:23 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.295 16:38:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:48.295 [2024-10-17 16:38:23.770302] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:23:48.295 [2024-10-17 16:38:23.770437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68623 ] 00:23:48.295 [2024-10-17 16:38:23.944413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.295 [2024-10-17 16:38:24.059623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:23:48.862 16:38:24 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:23:48.862 16:38:24 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:23:48.862 16:38:24 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:23:55.427 16:38:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:23:55.427 16:38:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:55.427 16:38:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:55.427 16:38:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:23:55.427 16:38:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:23:55.427 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:23:55.427 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:55.427 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:55.427 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:55.427 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:55.427 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:55.427 16:38:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.427 16:38:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:55.427 [2024-10-17 16:38:31.015525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:23:55.427 [2024-10-17 16:38:31.018267] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.427 [2024-10-17 16:38:31.018314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.427 [2024-10-17 16:38:31.018351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.427 [2024-10-17 16:38:31.018395] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.427 [2024-10-17 16:38:31.018423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.427 [2024-10-17 16:38:31.018439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.427 [2024-10-17 16:38:31.018455] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.427 [2024-10-17 16:38:31.018469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.427 [2024-10-17 16:38:31.018481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.427 [2024-10-17 16:38:31.018502] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.427 [2024-10-17 16:38:31.018514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.427 [2024-10-17 16:38:31.018529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.427 16:38:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.428 16:38:31 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:23:55.428 [2024-10-17 16:38:31.414889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:23:55.428 [2024-10-17 16:38:31.417516] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.428 [2024-10-17 16:38:31.417560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.428 [2024-10-17 16:38:31.417580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.428 [2024-10-17 16:38:31.417603] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.428 [2024-10-17 16:38:31.417618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.428 [2024-10-17 16:38:31.417630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.428 [2024-10-17 16:38:31.417646] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.428 [2024-10-17 16:38:31.417657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.428 [2024-10-17 16:38:31.417671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.428 [2024-10-17 16:38:31.417683] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:23:55.428 [2024-10-17 16:38:31.417708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:23:55.428 [2024-10-17 16:38:31.417720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:55.428 16:38:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.428 16:38:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:23:55.428 16:38:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:23:55.428 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:55.686 
16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:23:55.686 16:38:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:07.893 16:38:43 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.893 16:38:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:07.893 16:38:43 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:07.893 16:38:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:07.893 [2024-10-17 16:38:43.994692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:07.893 [2024-10-17 16:38:43.997803] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.893 [2024-10-17 16:38:43.997849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.893 [2024-10-17 16:38:43.997867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.893 [2024-10-17 16:38:43.997896] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.893 [2024-10-17 16:38:43.997908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.893 [2024-10-17 16:38:43.997924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.893 [2024-10-17 16:38:43.997937] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.893 [2024-10-17 16:38:43.997951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.893 [2024-10-17 16:38:43.997964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.893 [2024-10-17 16:38:43.997980] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:07.893 [2024-10-17 16:38:43.997991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:07.893 [2024-10-17 16:38:43.998007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:07.893 16:38:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.893 16:38:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:07.893 16:38:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:24:07.893 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:08.462 [2024-10-17 16:38:44.493901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:24:08.462 [2024-10-17 16:38:44.496692] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:08.462 [2024-10-17 16:38:44.496763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.462 [2024-10-17 16:38:44.496803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.462 [2024-10-17 16:38:44.496841] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:08.462 [2024-10-17 16:38:44.496856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.462 [2024-10-17 16:38:44.496869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.462 [2024-10-17 16:38:44.496886] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:08.462 [2024-10-17 16:38:44.496898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.462 [2024-10-17 16:38:44.496913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.462 [2024-10-17 16:38:44.496926] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:08.462 [2024-10-17 16:38:44.496941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.462 [2024-10-17 16:38:44.496953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.462 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:24:08.462 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:08.462 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:08.462 16:38:44 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:08.462 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:08.462 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:08.462 16:38:44 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.462 16:38:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:08.462 16:38:44 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.462 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:24:08.462 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:08.721 16:38:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:20.978 16:38:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:24:20.978 16:38:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:24:20.978 16:38:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:24:20.978 16:38:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:20.978 16:38:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:20.978 16:38:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:20.978 16:38:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.978 16:38:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:20.978 16:38:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.978 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:24:20.978 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:20.978 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:20.978 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:20.978 [2024-10-17 16:38:57.073695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
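The @56-@66 sequence above re-attaches both devices after a successful removal: a single echo 1, then per BDF an echo of uio_pci_generic, the BDF twice, and an empty string, followed by a 12 second settle. The destinations of those echoes are elided from the xtrace, so the following is a hypothetical reconstruction using standard Linux PCI sysfs attributes, not the confirmed script:

    # Hypothetical sysfs targets; only the echoed values appear in the log.
    echo 1 > /sys/bus/pci/rescan                                        # @56: rediscover removed functions
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59: pin the driver
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60/@61: trigger (re)probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear the override
    done
    sleep 12                                                            # @66: let hotplug settle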
00:24:20.978 [2024-10-17 16:38:57.076688] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:20.978 [2024-10-17 16:38:57.076755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.978 [2024-10-17 16:38:57.076773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.978 [2024-10-17 16:38:57.076799] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:20.978 [2024-10-17 16:38:57.076811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.978 [2024-10-17 16:38:57.076828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.978 [2024-10-17 16:38:57.076842] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:20.978 [2024-10-17 16:38:57.076856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.978 [2024-10-17 16:38:57.076867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.978 [2024-10-17 16:38:57.076889] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:20.978 [2024-10-17 16:38:57.076900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.978 [2024-10-17 16:38:57.076917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.978 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:20.979 16:38:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.979 16:38:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:20.979 16:38:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:24:20.979 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:21.546 [2024-10-17 16:38:57.572929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:24:21.546 [2024-10-17 16:38:57.575594] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:21.546 [2024-10-17 16:38:57.575653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.546 [2024-10-17 16:38:57.575674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.546 [2024-10-17 16:38:57.575710] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:21.546 [2024-10-17 16:38:57.575729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.546 [2024-10-17 16:38:57.575750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.546 [2024-10-17 16:38:57.575769] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:21.546 [2024-10-17 16:38:57.575781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.546 [2024-10-17 16:38:57.575820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.546 [2024-10-17 16:38:57.575834] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:21.546 [2024-10-17 16:38:57.575851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.546 [2024-10-17 16:38:57.575863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:21.546 16:38:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.546 16:38:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:21.546 16:38:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:21.546 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:21.805 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:21.805 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:21.805 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:21.805 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:21.805 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:21.805 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:21.805 16:38:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:24:21.805 16:38:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:21.805 16:38:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:21.805 16:38:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.18 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.18 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:24:34.020 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:24:34.020 16:39:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:24:34.020 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:24:34.021 16:39:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:24:34.021 16:39:10 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:24:40.594 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:40.594 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:40.594 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:40.595 [2024-10-17 16:39:16.238189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:40.595 [2024-10-17 16:39:16.240836] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.240991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.241128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 [2024-10-17 16:39:16.241245] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.241282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.241335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 [2024-10-17 16:39:16.241466] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.241487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.241500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 [2024-10-17 16:39:16.241516] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.241527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.241544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:40.595 16:39:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.595 16:39:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:40.595 16:39:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:40.595 [2024-10-17 16:39:16.637540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
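Each hotplug iteration begins at sw_hotplug.sh @39-@40 with an echo 1 per device, immediately followed by the nvme_ctrlr_fail and abort-trackers messages above as SPDK tears the controller down. The echo target is not shown in the trace; presumably it is the per-device PCI remove attribute, sketched here as an assumption:

    for dev in "${nvmes[@]}"; do
        # Assumed target: detaching the function from the PCI bus is what
        # drives the controller into the "failed state" logged above.
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done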
00:24:40.595 [2024-10-17 16:39:16.639371] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.639415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.639439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 [2024-10-17 16:39:16.639465] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.639480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.639492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 [2024-10-17 16:39:16.639507] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.639519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.639533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 [2024-10-17 16:39:16.639546] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:40.595 [2024-10-17 16:39:16.639560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.595 [2024-10-17 16:39:16.639572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:40.595 16:39:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.595 16:39:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:40.595 16:39:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:24:40.595 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:40.854 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:40.854 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:40.854 16:39:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:40.854 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:40.854 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:40.854 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:40.854 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:40.854 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:24:41.113 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:41.113 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:41.113 16:39:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:53.324 16:39:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.324 16:39:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:53.324 16:39:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:24:53.324 [2024-10-17 16:39:29.317196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:24:53.324 [2024-10-17 16:39:29.319917] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.324 [2024-10-17 16:39:29.320003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.324 [2024-10-17 16:39:29.320067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.324 [2024-10-17 16:39:29.320097] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.324 [2024-10-17 16:39:29.320109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.324 [2024-10-17 16:39:29.320124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.324 [2024-10-17 16:39:29.320138] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.324 [2024-10-17 16:39:29.320154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.324 [2024-10-17 16:39:29.320165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.324 [2024-10-17 16:39:29.320180] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.324 [2024-10-17 16:39:29.320191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.324 [2024-10-17 16:39:29.320205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:24:53.324 16:39:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.324 16:39:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:53.324 16:39:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:24:53.324 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:24:53.584 [2024-10-17 16:39:29.716541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:24:53.584 [2024-10-17 16:39:29.718383] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.584 [2024-10-17 16:39:29.718425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.584 [2024-10-17 16:39:29.718444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.584 [2024-10-17 16:39:29.718467] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.584 [2024-10-17 16:39:29.718484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.584 [2024-10-17 16:39:29.718497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.584 [2024-10-17 16:39:29.718513] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.584 [2024-10-17 16:39:29.718524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.584 [2024-10-17 16:39:29.718539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.584 [2024-10-17 16:39:29.718551] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:24:53.584 [2024-10-17 16:39:29.718565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:24:53.584 [2024-10-17 16:39:29.718577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:24:53.843 16:39:29 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.843 16:39:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:24:53.843 16:39:29 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:24:53.843 16:39:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:24:53.843 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:53.843 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:53.843 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:24:54.102 16:39:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:06.312 16:39:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.312 16:39:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:06.312 16:39:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:25:06.312 [2024-10-17 16:39:42.396225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:25:06.312 [2024-10-17 16:39:42.399212] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:06.312 [2024-10-17 16:39:42.399449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.312 [2024-10-17 16:39:42.399639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.312 [2024-10-17 16:39:42.399732] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.312 [2024-10-17 16:39:42.399789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.312 [2024-10-17 16:39:42.399935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.312 [2024-10-17 16:39:42.400001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.312 [2024-10-17 16:39:42.400043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.312 [2024-10-17 16:39:42.400164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.312 [2024-10-17 16:39:42.400234] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.312 [2024-10-17 16:39:42.400269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.312 [2024-10-17 16:39:42.400382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:06.312 16:39:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:06.312 16:39:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:06.312 16:39:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:25:06.312 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:25:06.571 [2024-10-17 16:39:42.795580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
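The repeating @50/@51 pattern is a poll loop: refresh the list of NVMe-backed bdevs, report which BDFs are still present, and back off half a second until the list is empty. Reconstructed from the trace as a sketch (the real script may structure it differently):

    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        sleep 0.5                                                 # @50
        bdfs=($(bdev_bdfs))
    done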
00:25:06.571 [2024-10-17 16:39:42.798278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.571 [2024-10-17 16:39:42.798336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.571 [2024-10-17 16:39:42.798355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.571 [2024-10-17 16:39:42.798378] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.571 [2024-10-17 16:39:42.798393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.571 [2024-10-17 16:39:42.798405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.571 [2024-10-17 16:39:42.798420] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.571 [2024-10-17 16:39:42.798432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.571 [2024-10-17 16:39:42.798446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.571 [2024-10-17 16:39:42.798459] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:25:06.571 [2024-10-17 16:39:42.798476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.571 [2024-10-17 16:39:42.798488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:06.855 16:39:42 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.855 16:39:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:06.855 16:39:42 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:25:06.855 16:39:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:25:06.855 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:06.855 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:06.855 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:25:07.119 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:25:07.119 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:07.119 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:25:07.120 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:25:07.120 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:25:07.120 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:25:07.120 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:25:07.120 16:39:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.24 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.24 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.24 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.24 2 00:25:19.326 remove_attach_helper took 45.24s to complete (handling 2 nvme drive(s)) 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:25:19.326 16:39:55 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68623 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 68623 ']' 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 68623 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68623 00:25:19.326 killing process with pid 68623 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68623' 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@969 -- # kill 68623 00:25:19.326 16:39:55 sw_hotplug -- common/autotest_common.sh@974 -- # wait 68623 00:25:21.861 16:39:57 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:22.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:22.998 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:22.998 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:22.998 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:22.998 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:23.258 00:25:23.258 real 2m33.933s 00:25:23.258 user 1m51.639s 00:25:23.258 sys 0m22.664s 00:25:23.258 16:39:59 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:25:23.258 ************************************ 00:25:23.258 END TEST sw_hotplug 00:25:23.258 ************************************ 00:25:23.258 16:39:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:25:23.258 16:39:59 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:25:23.258 16:39:59 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:25:23.258 16:39:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:23.258 16:39:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:23.258 16:39:59 -- common/autotest_common.sh@10 -- # set +x 00:25:23.258 ************************************ 00:25:23.258 START TEST nvme_xnvme 00:25:23.258 ************************************ 00:25:23.258 16:39:59 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:25:23.258 * Looking for test storage... 00:25:23.517 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.517 16:39:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:23.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.517 --rc genhtml_branch_coverage=1 00:25:23.517 --rc genhtml_function_coverage=1 00:25:23.517 --rc genhtml_legend=1 00:25:23.517 --rc geninfo_all_blocks=1 00:25:23.517 --rc geninfo_unexecuted_blocks=1 00:25:23.517 00:25:23.517 ' 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:23.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.517 --rc genhtml_branch_coverage=1 00:25:23.517 --rc genhtml_function_coverage=1 00:25:23.517 --rc genhtml_legend=1 00:25:23.517 --rc geninfo_all_blocks=1 00:25:23.517 --rc geninfo_unexecuted_blocks=1 00:25:23.517 00:25:23.517 ' 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:23.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.517 --rc genhtml_branch_coverage=1 00:25:23.517 --rc genhtml_function_coverage=1 00:25:23.517 --rc genhtml_legend=1 00:25:23.517 --rc geninfo_all_blocks=1 00:25:23.517 --rc geninfo_unexecuted_blocks=1 00:25:23.517 00:25:23.517 ' 00:25:23.517 16:39:59 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:23.517 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.517 --rc genhtml_branch_coverage=1 00:25:23.517 --rc genhtml_function_coverage=1 00:25:23.518 --rc genhtml_legend=1 00:25:23.518 --rc geninfo_all_blocks=1 00:25:23.518 --rc geninfo_unexecuted_blocks=1 00:25:23.518 00:25:23.518 ' 00:25:23.518 16:39:59 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:23.518 16:39:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.518 16:39:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.518 16:39:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.518 16:39:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.518 16:39:59 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.518 16:39:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.518 16:39:59 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.518 16:39:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:25:23.518 16:39:59 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.518 16:39:59 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:25:23.518 16:39:59 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:23.518 16:39:59 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:23.518 16:39:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:25:23.518 ************************************ 00:25:23.518 START TEST xnvme_to_malloc_dd_copy 00:25:23.518 ************************************ 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:25:23.518 16:39:59 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:25:23.518 16:39:59 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:25:23.518 { 00:25:23.518 "subsystems": [ 00:25:23.518 { 00:25:23.518 "subsystem": "bdev", 00:25:23.518 "config": [ 00:25:23.518 { 00:25:23.518 "params": { 00:25:23.518 "block_size": 512, 00:25:23.518 "num_blocks": 2097152, 00:25:23.518 "name": "malloc0" 00:25:23.518 }, 00:25:23.518 "method": "bdev_malloc_create" 00:25:23.518 }, 00:25:23.518 { 00:25:23.518 "params": { 00:25:23.518 "io_mechanism": "libaio", 00:25:23.518 "filename": "/dev/nullb0", 00:25:23.518 "name": "null0" 00:25:23.518 }, 00:25:23.518 "method": "bdev_xnvme_create" 00:25:23.518 }, 00:25:23.518 { 00:25:23.518 "method": "bdev_wait_for_examine" 00:25:23.518 } 00:25:23.518 ] 00:25:23.518 } 00:25:23.518 ] 00:25:23.518 } 00:25:23.777 [2024-10-17 16:39:59.819880] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
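gen_conf emits the JSON above on a file descriptor that spdk_dd consumes via --json /dev/fd/62. The equivalent standalone run, with the same config saved to a file (xnvme_copy.json is a hypothetical name), would be:

    # malloc0: 2097152 blocks x 512 B = 1 GiB RAM-backed source bdev
    # null0:   xnvme bdev over /dev/nullb0 (modprobe null_blk gb=1), libaio I/O path
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme_copy.json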
00:25:23.777 [2024-10-17 16:39:59.820196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69994 ] 00:25:23.777 [2024-10-17 16:39:59.990884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.036 [2024-10-17 16:40:00.119771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.572  [2024-10-17T16:40:03.805Z] Copying: 250/1024 [MB] (250 MBps) [2024-10-17T16:40:04.741Z] Copying: 498/1024 [MB] (248 MBps) [2024-10-17T16:40:05.677Z] Copying: 747/1024 [MB] (248 MBps) [2024-10-17T16:40:05.971Z] Copying: 995/1024 [MB] (247 MBps) [2024-10-17T16:40:10.159Z] Copying: 1024/1024 [MB] (average 249 MBps) 00:25:33.860 00:25:33.860 16:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:25:33.860 16:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:25:33.860 16:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:25:33.860 16:40:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:25:33.860 { 00:25:33.860 "subsystems": [ 00:25:33.860 { 00:25:33.860 "subsystem": "bdev", 00:25:33.860 "config": [ 00:25:33.860 { 00:25:33.860 "params": { 00:25:33.860 "block_size": 512, 00:25:33.860 "num_blocks": 2097152, 00:25:33.860 "name": "malloc0" 00:25:33.860 }, 00:25:33.860 "method": "bdev_malloc_create" 00:25:33.860 }, 00:25:33.860 { 00:25:33.860 "params": { 00:25:33.860 "io_mechanism": "libaio", 00:25:33.860 "filename": "/dev/nullb0", 00:25:33.860 "name": "null0" 00:25:33.860 }, 00:25:33.860 "method": "bdev_xnvme_create" 00:25:33.860 }, 00:25:33.860 { 00:25:33.860 "method": "bdev_wait_for_examine" 00:25:33.860 } 00:25:33.861 ] 00:25:33.861 } 00:25:33.861 ] 00:25:33.861 } 00:25:33.861 [2024-10-17 16:40:09.849160] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
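The second pass (xnvme.sh @47) reuses the identical bdev config and simply swaps source and sink, copying from the xnvme bdev back into RAM to exercise the read path:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62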
00:25:33.861 [2024-10-17 16:40:09.849294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70114 ] 00:25:33.861 [2024-10-17 16:40:10.022729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.861 [2024-10-17 16:40:10.146204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.392  [2024-10-17T16:40:14.069Z] Copying: 246/1024 [MB] (246 MBps) [2024-10-17T16:40:14.637Z] Copying: 495/1024 [MB] (249 MBps) [2024-10-17T16:40:16.011Z] Copying: 740/1024 [MB] (245 MBps) [2024-10-17T16:40:16.011Z] Copying: 998/1024 [MB] (257 MBps) [2024-10-17T16:40:20.302Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:25:44.003 00:25:44.003 16:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:25:44.003 16:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:25:44.003 16:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:25:44.003 16:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:25:44.003 16:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:25:44.003 16:40:19 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:25:44.003 { 00:25:44.003 "subsystems": [ 00:25:44.003 { 00:25:44.003 "subsystem": "bdev", 00:25:44.003 "config": [ 00:25:44.003 { 00:25:44.003 "params": { 00:25:44.003 "block_size": 512, 00:25:44.003 "num_blocks": 2097152, 00:25:44.003 "name": "malloc0" 00:25:44.003 }, 00:25:44.003 "method": "bdev_malloc_create" 00:25:44.003 }, 00:25:44.003 { 00:25:44.003 "params": { 00:25:44.003 "io_mechanism": "io_uring", 00:25:44.003 "filename": "/dev/nullb0", 00:25:44.003 "name": "null0" 00:25:44.003 }, 00:25:44.003 "method": "bdev_xnvme_create" 00:25:44.003 }, 00:25:44.003 { 00:25:44.003 "method": "bdev_wait_for_examine" 00:25:44.003 } 00:25:44.003 ] 00:25:44.003 } 00:25:44.003 ] 00:25:44.003 } 00:25:44.003 [2024-10-17 16:40:19.884319] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
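[editor's note] Between the two runs above, xnvme.sh has looped back to line 38 and swapped io_mechanism from libaio to io_uring; every other parameter stays identical. A condensed sketch of that outer loop, where run_dd is a hypothetical stand-in for the gen_conf | spdk_dd step traced above:

    declare -A method_bdev_xnvme_create_0=(
      [name]=null0
      [filename]=/dev/nullb0
    )
    for io in libaio io_uring; do
      # only the I/O backend changes between iterations
      method_bdev_xnvme_create_0[io_mechanism]=$io
      run_dd --ib=malloc0 --ob=null0   # malloc bdev -> null_blk
      run_dd --ib=null0 --ob=malloc0   # null_blk -> malloc bdev
    done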
00:25:44.003 [2024-10-17 16:40:19.884447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70229 ] 00:25:44.003 [2024-10-17 16:40:20.058271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.003 [2024-10-17 16:40:20.185724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.538  [2024-10-17T16:40:23.774Z] Copying: 260/1024 [MB] (260 MBps) [2024-10-17T16:40:24.710Z] Copying: 518/1024 [MB] (258 MBps) [2024-10-17T16:40:25.647Z] Copying: 777/1024 [MB] (258 MBps) [2024-10-17T16:40:29.830Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:25:53.531 00:25:53.531 16:40:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:25:53.531 16:40:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:25:53.531 16:40:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:25:53.531 16:40:29 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:25:53.531 { 00:25:53.531 "subsystems": [ 00:25:53.531 { 00:25:53.531 "subsystem": "bdev", 00:25:53.531 "config": [ 00:25:53.531 { 00:25:53.531 "params": { 00:25:53.531 "block_size": 512, 00:25:53.531 "num_blocks": 2097152, 00:25:53.531 "name": "malloc0" 00:25:53.531 }, 00:25:53.531 "method": "bdev_malloc_create" 00:25:53.531 }, 00:25:53.531 { 00:25:53.531 "params": { 00:25:53.531 "io_mechanism": "io_uring", 00:25:53.531 "filename": "/dev/nullb0", 00:25:53.531 "name": "null0" 00:25:53.531 }, 00:25:53.531 "method": "bdev_xnvme_create" 00:25:53.531 }, 00:25:53.531 { 00:25:53.531 "method": "bdev_wait_for_examine" 00:25:53.531 } 00:25:53.531 ] 00:25:53.531 } 00:25:53.531 ] 00:25:53.531 } 00:25:53.789 [2024-10-17 16:40:29.827907] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:25:53.789 [2024-10-17 16:40:29.828035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70338 ] 00:25:53.789 [2024-10-17 16:40:29.999138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.046 [2024-10-17 16:40:30.136296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.581  [2024-10-17T16:40:33.849Z] Copying: 257/1024 [MB] (257 MBps) [2024-10-17T16:40:34.783Z] Copying: 517/1024 [MB] (259 MBps) [2024-10-17T16:40:35.720Z] Copying: 778/1024 [MB] (261 MBps) [2024-10-17T16:40:39.913Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:26:03.614 00:26:03.614 16:40:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:26:03.614 16:40:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:26:03.614 ************************************ 00:26:03.614 END TEST xnvme_to_malloc_dd_copy 00:26:03.614 ************************************ 00:26:03.614 00:26:03.614 real 0m40.061s 00:26:03.614 user 0m35.201s 00:26:03.614 sys 0m4.322s 00:26:03.614 16:40:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:03.614 16:40:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:26:03.614 16:40:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:26:03.614 16:40:39 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:03.614 16:40:39 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:03.614 16:40:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:03.614 ************************************ 00:26:03.614 START TEST xnvme_bdevperf 00:26:03.614 ************************************ 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:26:03.614 
16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:03.614 16:40:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.614 { 00:26:03.614 "subsystems": [ 00:26:03.614 { 00:26:03.614 "subsystem": "bdev", 00:26:03.614 "config": [ 00:26:03.614 { 00:26:03.614 "params": { 00:26:03.614 "io_mechanism": "libaio", 00:26:03.614 "filename": "/dev/nullb0", 00:26:03.614 "name": "null0" 00:26:03.614 }, 00:26:03.614 "method": "bdev_xnvme_create" 00:26:03.614 }, 00:26:03.614 { 00:26:03.614 "method": "bdev_wait_for_examine" 00:26:03.614 } 00:26:03.614 ] 00:26:03.614 } 00:26:03.614 ] 00:26:03.614 } 00:26:03.873 [2024-10-17 16:40:39.951470] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:26:03.873 [2024-10-17 16:40:39.951776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70475 ] 00:26:03.873 [2024-10-17 16:40:40.126536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.132 [2024-10-17 16:40:40.252111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.391 Running I/O for 5 seconds... 00:26:06.337 145920.00 IOPS, 570.00 MiB/s [2024-10-17T16:40:44.014Z] 147872.00 IOPS, 577.62 MiB/s [2024-10-17T16:40:44.950Z] 148416.00 IOPS, 579.75 MiB/s [2024-10-17T16:40:45.890Z] 149040.00 IOPS, 582.19 MiB/s 00:26:09.591 Latency(us) 00:26:09.591 [2024-10-17T16:40:45.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.591 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:26:09.591 null0 : 5.00 149081.44 582.35 0.00 0.00 426.78 125.02 1868.70 00:26:09.591 [2024-10-17T16:40:45.890Z] =================================================================================================================== 00:26:09.591 [2024-10-17T16:40:45.890Z] Total : 149081.44 582.35 0.00 0.00 426.78 125.02 1868.70 00:26:10.528 16:40:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:26:10.528 16:40:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:26:10.528 16:40:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:26:10.528 16:40:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:26:10.528 16:40:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:26:10.528 16:40:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:10.786 { 00:26:10.786 "subsystems": [ 00:26:10.786 { 00:26:10.786 "subsystem": "bdev", 00:26:10.786 "config": [ 00:26:10.786 { 00:26:10.786 "params": { 00:26:10.786 "io_mechanism": "io_uring", 00:26:10.786 "filename": "/dev/nullb0", 00:26:10.786 "name": "null0" 00:26:10.786 }, 00:26:10.786 "method": "bdev_xnvme_create" 00:26:10.786 }, 00:26:10.786 { 00:26:10.786 "method": 
"bdev_wait_for_examine" 00:26:10.786 } 00:26:10.786 ] 00:26:10.786 } 00:26:10.786 ] 00:26:10.786 } 00:26:10.786 [2024-10-17 16:40:46.907343] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:26:10.786 [2024-10-17 16:40:46.907678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70555 ] 00:26:10.786 [2024-10-17 16:40:47.082543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.046 [2024-10-17 16:40:47.207318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.305 Running I/O for 5 seconds... 00:26:13.616 189312.00 IOPS, 739.50 MiB/s [2024-10-17T16:40:50.863Z] 190656.00 IOPS, 744.75 MiB/s [2024-10-17T16:40:51.800Z] 190698.67 IOPS, 744.92 MiB/s [2024-10-17T16:40:52.734Z] 190512.00 IOPS, 744.19 MiB/s 00:26:16.435 Latency(us) 00:26:16.435 [2024-10-17T16:40:52.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.435 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:26:16.435 null0 : 5.00 190366.07 743.62 0.00 0.00 333.71 196.58 1816.06 00:26:16.435 [2024-10-17T16:40:52.734Z] =================================================================================================================== 00:26:16.435 [2024-10-17T16:40:52.734Z] Total : 190366.07 743.62 0.00 0.00 333.71 196.58 1816.06 00:26:17.815 16:40:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:26:17.815 16:40:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:26:17.815 00:26:17.815 real 0m13.901s 00:26:17.815 user 0m10.457s 00:26:17.815 sys 0m3.244s 00:26:17.815 ************************************ 00:26:17.815 END TEST xnvme_bdevperf 00:26:17.815 ************************************ 00:26:17.815 16:40:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.815 16:40:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.815 ************************************ 00:26:17.815 END TEST nvme_xnvme 00:26:17.815 ************************************ 00:26:17.815 00:26:17.815 real 0m54.357s 00:26:17.815 user 0m45.822s 00:26:17.815 sys 0m7.804s 00:26:17.815 16:40:53 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:17.815 16:40:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:17.815 16:40:53 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:26:17.815 16:40:53 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:17.815 16:40:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:17.815 16:40:53 -- common/autotest_common.sh@10 -- # set +x 00:26:17.815 ************************************ 00:26:17.815 START TEST blockdev_xnvme 00:26:17.816 ************************************ 00:26:17.816 16:40:53 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:26:17.816 * Looking for test storage... 
00:26:17.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:26:17.816 16:40:53 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:17.816 16:40:53 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:26:17.816 16:40:53 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.816 16:40:54 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:17.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.816 --rc genhtml_branch_coverage=1 00:26:17.816 --rc genhtml_function_coverage=1 00:26:17.816 --rc genhtml_legend=1 00:26:17.816 --rc geninfo_all_blocks=1 00:26:17.816 --rc geninfo_unexecuted_blocks=1 00:26:17.816 00:26:17.816 ' 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:17.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.816 --rc genhtml_branch_coverage=1 00:26:17.816 --rc genhtml_function_coverage=1 00:26:17.816 --rc genhtml_legend=1 
00:26:17.816 --rc geninfo_all_blocks=1 00:26:17.816 --rc geninfo_unexecuted_blocks=1 00:26:17.816 00:26:17.816 ' 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:17.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.816 --rc genhtml_branch_coverage=1 00:26:17.816 --rc genhtml_function_coverage=1 00:26:17.816 --rc genhtml_legend=1 00:26:17.816 --rc geninfo_all_blocks=1 00:26:17.816 --rc geninfo_unexecuted_blocks=1 00:26:17.816 00:26:17.816 ' 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:17.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.816 --rc genhtml_branch_coverage=1 00:26:17.816 --rc genhtml_function_coverage=1 00:26:17.816 --rc genhtml_legend=1 00:26:17.816 --rc geninfo_all_blocks=1 00:26:17.816 --rc geninfo_unexecuted_blocks=1 00:26:17.816 00:26:17.816 ' 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70709 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:26:17.816 16:40:54 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70709 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@831 -- # 
'[' -z 70709 ']' 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:17.816 16:40:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:18.075 [2024-10-17 16:40:54.203763] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:26:18.075 [2024-10-17 16:40:54.204073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70709 ] 00:26:18.334 [2024-10-17 16:40:54.377408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.334 [2024-10-17 16:40:54.495219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.271 16:40:55 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.271 16:40:55 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:26:19.271 16:40:55 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:26:19.271 16:40:55 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:26:19.271 16:40:55 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:26:19.271 16:40:55 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:26:19.271 16:40:55 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:19.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:20.099 Waiting for block devices as requested 00:26:20.099 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:20.099 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:20.359 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:20.359 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:25.630 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:26:25.630 
16:41:01 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 nvme0n1 00:26:25.630 nvme1n1 00:26:25.630 nvme2n1 00:26:25.630 nvme2n2 00:26:25.630 nvme2n3 00:26:25.630 nvme3n1 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:26:25.630 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.630 16:41:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:25.891 16:41:01 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.891 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:26:25.891 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:26:25.891 16:41:01 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2604010b-5376-47e6-bba5-1669eb505ed0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2604010b-5376-47e6-bba5-1669eb505ed0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dddf8cc1-dbd8-4bb6-bdcb-13847a8869a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dddf8cc1-dbd8-4bb6-bdcb-13847a8869a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "190e4a2e-7a0d-4832-9a94-ddea5926adb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "190e4a2e-7a0d-4832-9a94-ddea5926adb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "a88d35b6-a5ef-4dee-b0e6-034a3633e182"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a88d35b6-a5ef-4dee-b0e6-034a3633e182",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "1383a737-0003-48c3-9c83-db09ae180b3d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1383a737-0003-48c3-9c83-db09ae180b3d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "63578089-3174-4af0-9396-f24d141d9578"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "63578089-3174-4af0-9396-f24d141d9578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:26:25.891 16:41:02 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:26:25.891 16:41:02 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:26:25.891 16:41:02 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:26:25.891 16:41:02 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70709 00:26:25.891 16:41:02 blockdev_xnvme -- 
common/autotest_common.sh@950 -- # '[' -z 70709 ']' 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 70709 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70709 00:26:25.891 killing process with pid 70709 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70709' 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 70709 00:26:25.891 16:41:02 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 70709 00:26:28.463 16:41:04 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:28.463 16:41:04 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:26:28.463 16:41:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:26:28.463 16:41:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:28.463 16:41:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:28.463 ************************************ 00:26:28.463 START TEST bdev_hello_world 00:26:28.463 ************************************ 00:26:28.463 16:41:04 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:26:28.463 [2024-10-17 16:41:04.604013] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:26:28.463 [2024-10-17 16:41:04.604143] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71087 ] 00:26:28.722 [2024-10-17 16:41:04.778467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.722 [2024-10-17 16:41:04.904035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.288 [2024-10-17 16:41:05.375986] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:26:29.288 [2024-10-17 16:41:05.376039] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:26:29.288 [2024-10-17 16:41:05.376058] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:26:29.288 [2024-10-17 16:41:05.378335] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:26:29.288 [2024-10-17 16:41:05.378845] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:26:29.288 [2024-10-17 16:41:05.379021] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:26:29.289 [2024-10-17 16:41:05.379223] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:26:29.289 00:26:29.289 [2024-10-17 16:41:05.379246] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:26:30.665 00:26:30.665 ************************************ 00:26:30.665 END TEST bdev_hello_world 00:26:30.665 ************************************ 00:26:30.665 real 0m2.037s 00:26:30.665 user 0m1.648s 00:26:30.665 sys 0m0.271s 00:26:30.665 16:41:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:30.665 16:41:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:26:30.665 16:41:06 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:26:30.665 16:41:06 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:30.665 16:41:06 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:30.665 16:41:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:30.665 ************************************ 00:26:30.665 START TEST bdev_bounds 00:26:30.665 ************************************ 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71129 00:26:30.665 Process bdevio pid: 71129 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71129' 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71129 00:26:30.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 71129 ']' 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.665 16:41:06 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:30.665 [2024-10-17 16:41:06.713244] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
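[editor's note] bdev_bounds starts the bdevio server in wait-for-tests mode against the same bdev.json, then fires the per-bdev suite battery over RPC, as traced below. A condensed sketch; the harness's waitforlisten/killprocess helpers are replaced here by plain job control and a sleep, which is an assumption rather than what blockdev.sh literally does:

    # -w: wait for the perform_tests RPC before running, -s 0: no pre-reserved memory
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    sleep 1   # crude stand-in for waitforlisten
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"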
00:26:30.665 [2024-10-17 16:41:06.713383] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71129 ] 00:26:30.665 [2024-10-17 16:41:06.896809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:30.924 [2024-10-17 16:41:07.027514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.924 [2024-10-17 16:41:07.027608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.924 [2024-10-17 16:41:07.027636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.567 16:41:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:31.567 16:41:07 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:26:31.567 16:41:07 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:26:31.567 I/O targets: 00:26:31.567 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:26:31.567 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:26:31.567 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:31.567 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:31.567 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:26:31.567 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:26:31.567 00:26:31.567 00:26:31.567 CUnit - A unit testing framework for C - Version 2.1-3 00:26:31.567 http://cunit.sourceforge.net/ 00:26:31.567 00:26:31.567 00:26:31.567 Suite: bdevio tests on: nvme3n1 00:26:31.567 Test: blockdev write read block ...passed 00:26:31.567 Test: blockdev write zeroes read block ...passed 00:26:31.567 Test: blockdev write zeroes read no split ...passed 00:26:31.567 Test: blockdev write zeroes read split ...passed 00:26:31.567 Test: blockdev write zeroes read split partial ...passed 00:26:31.567 Test: blockdev reset ...passed 00:26:31.567 Test: blockdev write read 8 blocks ...passed 00:26:31.567 Test: blockdev write read size > 128k ...passed 00:26:31.567 Test: blockdev write read invalid size ...passed 00:26:31.567 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:31.567 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:31.567 Test: blockdev write read max offset ...passed 00:26:31.567 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:31.567 Test: blockdev writev readv 8 blocks ...passed 00:26:31.567 Test: blockdev writev readv 30 x 1block ...passed 00:26:31.567 Test: blockdev writev readv block ...passed 00:26:31.567 Test: blockdev writev readv size > 128k ...passed 00:26:31.567 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:31.567 Test: blockdev comparev and writev ...passed 00:26:31.567 Test: blockdev nvme passthru rw ...passed 00:26:31.567 Test: blockdev nvme passthru vendor specific ...passed 00:26:31.567 Test: blockdev nvme admin passthru ...passed 00:26:31.567 Test: blockdev copy ...passed 00:26:31.567 Suite: bdevio tests on: nvme2n3 00:26:31.567 Test: blockdev write read block ...passed 00:26:31.567 Test: blockdev write zeroes read block ...passed 00:26:31.567 Test: blockdev write zeroes read no split ...passed 00:26:31.567 Test: blockdev write zeroes read split ...passed 00:26:31.826 Test: blockdev write zeroes read split partial ...passed 00:26:31.826 Test: blockdev reset ...passed 
00:26:31.826 Test: blockdev write read 8 blocks ...passed 00:26:31.826 Test: blockdev write read size > 128k ...passed 00:26:31.826 Test: blockdev write read invalid size ...passed 00:26:31.826 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:31.826 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:31.826 Test: blockdev write read max offset ...passed 00:26:31.826 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:31.826 Test: blockdev writev readv 8 blocks ...passed 00:26:31.826 Test: blockdev writev readv 30 x 1block ...passed 00:26:31.826 Test: blockdev writev readv block ...passed 00:26:31.826 Test: blockdev writev readv size > 128k ...passed 00:26:31.826 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:31.826 Test: blockdev comparev and writev ...passed 00:26:31.826 Test: blockdev nvme passthru rw ...passed 00:26:31.826 Test: blockdev nvme passthru vendor specific ...passed 00:26:31.826 Test: blockdev nvme admin passthru ...passed 00:26:31.826 Test: blockdev copy ...passed 00:26:31.826 Suite: bdevio tests on: nvme2n2 00:26:31.826 Test: blockdev write read block ...passed 00:26:31.826 Test: blockdev write zeroes read block ...passed 00:26:31.826 Test: blockdev write zeroes read no split ...passed 00:26:31.826 Test: blockdev write zeroes read split ...passed 00:26:31.826 Test: blockdev write zeroes read split partial ...passed 00:26:31.826 Test: blockdev reset ...passed 00:26:31.826 Test: blockdev write read 8 blocks ...passed 00:26:31.826 Test: blockdev write read size > 128k ...passed 00:26:31.826 Test: blockdev write read invalid size ...passed 00:26:31.826 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:31.826 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:31.826 Test: blockdev write read max offset ...passed 00:26:31.826 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:31.826 Test: blockdev writev readv 8 blocks ...passed 00:26:31.826 Test: blockdev writev readv 30 x 1block ...passed 00:26:31.826 Test: blockdev writev readv block ...passed 00:26:31.826 Test: blockdev writev readv size > 128k ...passed 00:26:31.826 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:31.826 Test: blockdev comparev and writev ...passed 00:26:31.826 Test: blockdev nvme passthru rw ...passed 00:26:31.826 Test: blockdev nvme passthru vendor specific ...passed 00:26:31.826 Test: blockdev nvme admin passthru ...passed 00:26:31.826 Test: blockdev copy ...passed 00:26:31.826 Suite: bdevio tests on: nvme2n1 00:26:31.826 Test: blockdev write read block ...passed 00:26:31.826 Test: blockdev write zeroes read block ...passed 00:26:31.826 Test: blockdev write zeroes read no split ...passed 00:26:31.826 Test: blockdev write zeroes read split ...passed 00:26:31.826 Test: blockdev write zeroes read split partial ...passed 00:26:31.826 Test: blockdev reset ...passed 00:26:31.826 Test: blockdev write read 8 blocks ...passed 00:26:31.826 Test: blockdev write read size > 128k ...passed 00:26:31.826 Test: blockdev write read invalid size ...passed 00:26:31.826 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:31.826 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:31.826 Test: blockdev write read max offset ...passed 00:26:31.826 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:31.826 Test: blockdev writev readv 8 blocks 
...passed 00:26:31.826 Test: blockdev writev readv 30 x 1block ...passed 00:26:31.826 Test: blockdev writev readv block ...passed 00:26:31.826 Test: blockdev writev readv size > 128k ...passed 00:26:31.826 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:31.826 Test: blockdev comparev and writev ...passed 00:26:31.826 Test: blockdev nvme passthru rw ...passed 00:26:31.826 Test: blockdev nvme passthru vendor specific ...passed 00:26:31.826 Test: blockdev nvme admin passthru ...passed 00:26:31.826 Test: blockdev copy ...passed 00:26:31.826 Suite: bdevio tests on: nvme1n1 00:26:31.826 Test: blockdev write read block ...passed 00:26:31.826 Test: blockdev write zeroes read block ...passed 00:26:31.826 Test: blockdev write zeroes read no split ...passed 00:26:31.826 Test: blockdev write zeroes read split ...passed 00:26:32.085 Test: blockdev write zeroes read split partial ...passed 00:26:32.085 Test: blockdev reset ...passed 00:26:32.085 Test: blockdev write read 8 blocks ...passed 00:26:32.085 Test: blockdev write read size > 128k ...passed 00:26:32.085 Test: blockdev write read invalid size ...passed 00:26:32.085 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:32.085 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:32.085 Test: blockdev write read max offset ...passed 00:26:32.085 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:32.085 Test: blockdev writev readv 8 blocks ...passed 00:26:32.085 Test: blockdev writev readv 30 x 1block ...passed 00:26:32.085 Test: blockdev writev readv block ...passed 00:26:32.085 Test: blockdev writev readv size > 128k ...passed 00:26:32.085 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:32.085 Test: blockdev comparev and writev ...passed 00:26:32.085 Test: blockdev nvme passthru rw ...passed 00:26:32.085 Test: blockdev nvme passthru vendor specific ...passed 00:26:32.085 Test: blockdev nvme admin passthru ...passed 00:26:32.085 Test: blockdev copy ...passed 00:26:32.085 Suite: bdevio tests on: nvme0n1 00:26:32.085 Test: blockdev write read block ...passed 00:26:32.085 Test: blockdev write zeroes read block ...passed 00:26:32.085 Test: blockdev write zeroes read no split ...passed 00:26:32.085 Test: blockdev write zeroes read split ...passed 00:26:32.085 Test: blockdev write zeroes read split partial ...passed 00:26:32.085 Test: blockdev reset ...passed 00:26:32.085 Test: blockdev write read 8 blocks ...passed 00:26:32.085 Test: blockdev write read size > 128k ...passed 00:26:32.085 Test: blockdev write read invalid size ...passed 00:26:32.085 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:32.085 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:32.085 Test: blockdev write read max offset ...passed 00:26:32.085 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:32.085 Test: blockdev writev readv 8 blocks ...passed 00:26:32.085 Test: blockdev writev readv 30 x 1block ...passed 00:26:32.085 Test: blockdev writev readv block ...passed 00:26:32.085 Test: blockdev writev readv size > 128k ...passed 00:26:32.085 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:32.085 Test: blockdev comparev and writev ...passed 00:26:32.085 Test: blockdev nvme passthru rw ...passed 00:26:32.085 Test: blockdev nvme passthru vendor specific ...passed 00:26:32.085 Test: blockdev nvme admin passthru ...passed 00:26:32.085 Test: blockdev copy ...passed 
00:26:32.085 00:26:32.085 Run Summary: Type Total Ran Passed Failed Inactive 00:26:32.085 suites 6 6 n/a 0 0 00:26:32.085 tests 138 138 138 0 0 00:26:32.085 asserts 780 780 780 0 n/a 00:26:32.085 00:26:32.085 Elapsed time = 1.378 seconds 00:26:32.085 0 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71129 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 71129 ']' 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 71129 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71129 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71129' 00:26:32.085 killing process with pid 71129 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 71129 00:26:32.085 16:41:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 71129 00:26:33.463 16:41:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:26:33.463 00:26:33.463 real 0m2.865s 00:26:33.463 user 0m7.134s 00:26:33.463 sys 0m0.453s 00:26:33.463 16:41:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:33.463 ************************************ 00:26:33.463 END TEST bdev_bounds 00:26:33.463 ************************************ 00:26:33.463 16:41:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:26:33.463 16:41:09 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:26:33.463 16:41:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:33.463 16:41:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:33.463 16:41:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:33.463 ************************************ 00:26:33.463 START TEST bdev_nbd 00:26:33.463 ************************************ 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71195 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71195 /var/tmp/spdk-nbd.sock 00:26:33.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 71195 ']' 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:33.463 16:41:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:33.463 [2024-10-17 16:41:09.668806] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
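Everything below talks to a private bdev_svc instance over its own RPC socket rather than to a production target: blockdev.sh backgrounds bdev_svc with -r /var/tmp/spdk-nbd.sock, installs the cleanup trap, and waitforlisten blocks until RPCs are answered on that socket (cleanup is the suite's teardown helper defined elsewhere in blockdev.sh). A condensed sketch of that bring-up; the retry budget and the use of rpc_get_methods as the readiness probe approximate waitforlisten rather than copying it verbatim:

rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-nbd.sock
"$rootdir"/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
    --json "$rootdir"/test/bdev/bdev.json &
nbd_pid=$!
trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for _ in $(seq 1 100); do
  # The RPC only succeeds once the app has created the socket and is serving.
  "$rootdir"/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null && break
  sleep 0.1
done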
00:26:33.464 [2024-10-17 16:41:09.669150] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:33.723 [2024-10-17 16:41:09.847944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.723 [2024-10-17 16:41:09.990737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:34.303 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:26:34.562 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:26:34.562 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:26:34.562 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:26:34.562 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:34.562 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:34.562 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:34.563 
1+0 records in 00:26:34.563 1+0 records out 00:26:34.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594584 s, 6.9 MB/s 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:34.563 16:41:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:34.822 1+0 records in 00:26:34.822 1+0 records out 00:26:34.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502921 s, 8.1 MB/s 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:34.822 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:35.081 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:26:35.339 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:26:35.340 16:41:11 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:35.340 1+0 records in 00:26:35.340 1+0 records out 00:26:35.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504267 s, 8.1 MB/s 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:35.340 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:35.598 1+0 records in 00:26:35.598 1+0 records out 00:26:35.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636926 s, 6.4 MB/s 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:35.598 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:35.857 1+0 records in 00:26:35.857 1+0 records out 00:26:35.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844803 s, 4.8 MB/s 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:35.857 16:41:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:35.857 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:35.857 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:35.857 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:35.857 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:35.857 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:26:36.116 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:26:36.116 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:26:36.116 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:26:36.116 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:26:36.116 16:41:12 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:36.116 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:36.117 1+0 records in 00:26:36.117 1+0 records out 00:26:36.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655499 s, 6.2 MB/s 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:26:36.117 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd0", 00:26:36.375 "bdev_name": "nvme0n1" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd1", 00:26:36.375 "bdev_name": "nvme1n1" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd2", 00:26:36.375 "bdev_name": "nvme2n1" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd3", 00:26:36.375 "bdev_name": "nvme2n2" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd4", 00:26:36.375 "bdev_name": "nvme2n3" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd5", 00:26:36.375 "bdev_name": "nvme3n1" 00:26:36.375 } 00:26:36.375 ]' 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd0", 00:26:36.375 "bdev_name": "nvme0n1" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd1", 00:26:36.375 "bdev_name": "nvme1n1" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd2", 00:26:36.375 "bdev_name": "nvme2n1" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd3", 00:26:36.375 "bdev_name": "nvme2n2" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd4", 00:26:36.375 "bdev_name": "nvme2n3" 00:26:36.375 }, 00:26:36.375 { 00:26:36.375 "nbd_device": "/dev/nbd5", 00:26:36.375 "bdev_name": "nvme3n1" 00:26:36.375 } 00:26:36.375 ]' 00:26:36.375 16:41:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:36.375 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:36.634 16:41:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:36.893 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:37.151 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:26:37.409 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:37.668 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:37.928 16:41:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:37.928 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:26:38.187 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:26:38.446 /dev/nbd0 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.446 1+0 records in 00:26:38.446 1+0 records out 00:26:38.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640799 s, 6.4 MB/s 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:26:38.446 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:26:38.705 /dev/nbd1 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.705 1+0 records in 00:26:38.705 1+0 records out 00:26:38.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0006527 s, 6.3 MB/s 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:38.705 16:41:14 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:26:38.705 16:41:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:26:38.964 /dev/nbd10 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:38.964 1+0 records in 00:26:38.964 1+0 records out 00:26:38.964 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685472 s, 6.0 MB/s 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:26:38.964 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:26:39.223 /dev/nbd11 00:26:39.223 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:26:39.223 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:26:39.223 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:26:39.223 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:39.223 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:39.223 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:39.223 16:41:15 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:26:39.223 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.224 1+0 records in 00:26:39.224 1+0 records out 00:26:39.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586304 s, 7.0 MB/s 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:26:39.224 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:26:39.482 /dev/nbd12 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.482 1+0 records in 00:26:39.482 1+0 records out 00:26:39.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000795546 s, 5.1 MB/s 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:26:39.482 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:26:39.741 /dev/nbd13 00:26:39.741 16:41:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.741 1+0 records in 00:26:39.741 1+0 records out 00:26:39.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538992 s, 7.6 MB/s 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:39.741 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:40.001 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd0", 00:26:40.001 "bdev_name": "nvme0n1" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd1", 00:26:40.001 "bdev_name": "nvme1n1" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd10", 00:26:40.001 "bdev_name": "nvme2n1" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd11", 00:26:40.001 "bdev_name": "nvme2n2" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd12", 00:26:40.001 "bdev_name": "nvme2n3" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd13", 00:26:40.001 "bdev_name": "nvme3n1" 00:26:40.001 } 00:26:40.001 ]' 00:26:40.001 16:41:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd0", 00:26:40.001 "bdev_name": "nvme0n1" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd1", 00:26:40.001 "bdev_name": "nvme1n1" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd10", 00:26:40.001 "bdev_name": "nvme2n1" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd11", 00:26:40.001 "bdev_name": "nvme2n2" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd12", 00:26:40.001 "bdev_name": "nvme2n3" 00:26:40.001 }, 00:26:40.001 { 00:26:40.001 "nbd_device": "/dev/nbd13", 00:26:40.001 "bdev_name": "nvme3n1" 00:26:40.001 } 00:26:40.001 ]' 00:26:40.001 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:40.259 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:26:40.259 /dev/nbd1 00:26:40.259 /dev/nbd10 00:26:40.260 /dev/nbd11 00:26:40.260 /dev/nbd12 00:26:40.260 /dev/nbd13' 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:26:40.260 /dev/nbd1 00:26:40.260 /dev/nbd10 00:26:40.260 /dev/nbd11 00:26:40.260 /dev/nbd12 00:26:40.260 /dev/nbd13' 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:26:40.260 256+0 records in 00:26:40.260 256+0 records out 00:26:40.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127298 s, 82.4 MB/s 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:26:40.260 256+0 records in 00:26:40.260 256+0 records out 00:26:40.260 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.119619 s, 8.8 MB/s 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:40.260 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:26:40.518 256+0 records in 00:26:40.518 256+0 records out 00:26:40.518 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.145562 s, 7.2 MB/s 00:26:40.518 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:40.518 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:26:40.518 256+0 records in 00:26:40.518 256+0 records out 00:26:40.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122318 s, 8.6 MB/s 00:26:40.518 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:40.518 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:26:40.776 256+0 records in 00:26:40.776 256+0 records out 00:26:40.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122447 s, 8.6 MB/s 00:26:40.776 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:40.776 16:41:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:26:40.776 256+0 records in 00:26:40.776 256+0 records out 00:26:40.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121032 s, 8.7 MB/s 00:26:40.776 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:26:40.776 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:26:41.035 256+0 records in 00:26:41.035 256+0 records out 00:26:41.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120325 s, 8.7 MB/s 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:41.035 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:41.293 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:41.552 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:41.812 16:41:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:26:42.116 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:26:42.116 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:26:42.116 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:26:42.116 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:42.116 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:42.116 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:42.397 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:42.656 16:41:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:42.656 16:41:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:26:42.914 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:26:43.173 malloc_lvol_verify 00:26:43.173 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:26:43.431 b13d431a-899a-4e1f-ae30-f9192dadb114 00:26:43.431 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:26:43.689 fd025d97-6e00-4089-ad02-5dc2b2a63dc1 00:26:43.689 16:41:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:26:43.947 /dev/nbd0 00:26:43.947 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:26:43.947 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:26:43.947 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:26:43.947 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:26:43.947 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
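[Editor's note: the stop/wait sequences traced above repeat one helper for every device in nbd_list: nbd_stop_disk over the RPC socket, then a poll of /proc/partitions until the kernel drops the device. A minimal bash sketch of that helper, reconstructed from the nbd_common.sh@35-45 trace lines alone rather than quoted from the SPDK source; the sleep between polls is an assumption, since the delay itself never appears in the xtrace:

    waitfornbd_exit() {
        local nbd_name=$1
        # Up to 20 polls; done as soon as the device leaves /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed back-off between polls; not visible in the trace
        done
        return 0
    }
]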
00:26:43.947 mke2fs 1.47.0 (5-Feb-2023) 00:26:43.947 Discarding device blocks: 0/4096 done 00:26:43.947 Creating filesystem with 4096 1k blocks and 1024 inodes 00:26:43.947 00:26:43.947 Allocating group tables: 0/1 done 00:26:43.947 Writing inode tables: 0/1 done 00:26:43.947 Creating journal (1024 blocks): done 00:26:43.947 Writing superblocks and filesystem accounting information: 0/1 done 00:26:43.947 00:26:43.948 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:26:43.948 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:26:43.948 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:43.948 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:43.948 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:26:43.948 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:43.948 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71195 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 71195 ']' 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 71195 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:44.206 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71195 00:26:44.466 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:44.466 killing process with pid 71195 00:26:44.466 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:44.466 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71195' 00:26:44.466 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 71195 00:26:44.466 16:41:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 71195 00:26:45.894 16:41:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:26:45.894 00:26:45.894 real 0m12.218s 00:26:45.894 user 0m16.069s 00:26:45.894 sys 0m5.169s 00:26:45.894 16:41:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.894 16:41:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:26:45.894 ************************************ 
00:26:45.894 END TEST bdev_nbd 00:26:45.894 ************************************ 00:26:45.894 16:41:21 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:26:45.894 16:41:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:26:45.894 16:41:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:26:45.894 16:41:21 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:26:45.894 16:41:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:45.894 16:41:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.894 16:41:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:45.894 ************************************ 00:26:45.894 START TEST bdev_fio 00:26:45.894 ************************************ 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:26:45.894 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:26:45.894 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # 
echo serialize_overlap=1 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:45.895 ************************************ 00:26:45.895 START TEST bdev_fio_rw_verify 00:26:45.895 ************************************ 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:26:45.895 16:41:21 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:45.895 16:41:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:45.895 16:41:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:26:45.895 16:41:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:45.895 16:41:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:26:46.154 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:46.154 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:46.154 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:46.154 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:46.154 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:46.154 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:26:46.154 fio-3.35
00:26:46.154 Starting 6 threads
00:26:58.358
00:26:58.358 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71617: Thu Oct 17 16:41:33 2024
00:26:58.358 read: IOPS=31.9k, BW=125MiB/s (131MB/s)(1245MiB/10001msec)
00:26:58.359 slat (usec): min=2, max=4765, avg= 6.30, stdev=10.36
00:26:58.359 clat (usec): min=97, max=25334, avg=558.93, stdev=343.93
00:26:58.359 lat (usec): min=103, max=25342, avg=565.24, stdev=344.74
00:26:58.359 clat percentiles (usec):
00:26:58.359 | 50.000th=[ 545], 99.000th=[ 1336], 99.900th=[ 2704], 99.990th=[15664],
00:26:58.359 | 99.999th=[25297]
00:26:58.359 write: IOPS=32.2k, BW=126MiB/s (132MB/s)(1258MiB/10001msec); 0 zone resets
00:26:58.359 slat (usec): min=11, max=2051, avg=27.42, stdev=38.42
00:26:58.359 clat (usec): min=71, max=25335, avg=667.84, stdev=365.08
00:26:58.359 lat (usec): min=98, max=25354, avg=695.27, stdev=371.15
00:26:58.359 clat percentiles (usec):
00:26:58.359 | 50.000th=[ 635], 99.000th=[ 1713], 99.900th=[ 2671], 99.990th=[ 5080],
00:26:58.359 | 99.999th=[25297]
00:26:58.359 bw ( KiB/s): min=100440, max=167424, per=100.00%, avg=129472.74, stdev=3006.27, samples=114
00:26:58.359 iops : min=25110, max=41856, avg=32367.84, stdev=751.57, samples=114
00:26:58.359 lat (usec) : 100=0.01%, 250=6.44%, 500=29.50%, 750=40.15%, 1000=16.67%
00:26:58.359 lat (msec) : 2=6.91%, 4=0.30%, 10=0.01%, 20=0.01%, 50=0.01%
00:26:58.359 cpu : usr=52.77%, sys=31.52%, ctx=8209, majf=0, minf=26670
00:26:58.359 IO depths : 1=11.9%, 2=24.3%, 4=50.7%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:58.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:58.359 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:58.359 issued rwts: total=318821,322171,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:58.359 latency : target=0, window=0, percentile=100.00%, depth=8
00:26:58.359
00:26:58.359 Run status group 0 (all jobs):
00:26:58.359 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=1245MiB (1306MB), run=10001-10001msec
00:26:58.359 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=1258MiB (1320MB), run=10001-10001msec
00:26:58.359 -----------------------------------------------------
00:26:58.359 Suppressions used:
00:26:58.359 count bytes template
00:26:58.359 6 48 /usr/src/fio/parse.c
00:26:58.359 3114 298944 /usr/src/fio/iolog.c
00:26:58.359 1 8 libtcmalloc_minimal.so
00:26:58.359 1 904 libcrypto.so
00:26:58.359 -----------------------------------------------------
00:26:58.359
00:26:58.359
00:26:58.359 real 0m12.576s
00:26:58.359 user 0m33.763s
00:26:58.359 sys 0m19.306s
00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:26:58.359 ************************************ 00:26:58.359 END TEST bdev_fio_rw_verify 00:26:58.359 ************************************ 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 --
# local fio_dir=/usr/src/fio 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:26:58.359 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "2604010b-5376-47e6-bba5-1669eb505ed0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2604010b-5376-47e6-bba5-1669eb505ed0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dddf8cc1-dbd8-4bb6-bdcb-13847a8869a5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dddf8cc1-dbd8-4bb6-bdcb-13847a8869a5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "190e4a2e-7a0d-4832-9a94-ddea5926adb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "190e4a2e-7a0d-4832-9a94-ddea5926adb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "a88d35b6-a5ef-4dee-b0e6-034a3633e182"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a88d35b6-a5ef-4dee-b0e6-034a3633e182",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "1383a737-0003-48c3-9c83-db09ae180b3d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1383a737-0003-48c3-9c83-db09ae180b3d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "63578089-3174-4af0-9396-f24d141d9578"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "63578089-3174-4af0-9396-f24d141d9578",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:26:58.624 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:26:58.624 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:26:58.624 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:26:58.624 /home/vagrant/spdk_repo/spdk 00:26:58.625 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:26:58.625 16:41:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:26:58.625 00:26:58.625 real 0m12.811s 00:26:58.625 user 0m33.879s 00:26:58.625 sys 0m19.434s 00:26:58.625 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:58.625 ************************************ 00:26:58.625 END TEST bdev_fio 00:26:58.625 ************************************ 00:26:58.625 16:41:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:26:58.625 16:41:34 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:58.625 16:41:34 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:58.625 16:41:34 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:26:58.625 16:41:34 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:58.625 16:41:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:26:58.625 ************************************ 00:26:58.625 START TEST bdev_verify 00:26:58.625 ************************************ 00:26:58.625 16:41:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:26:58.625 [2024-10-17 16:41:34.823375] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:26:58.625 [2024-10-17 16:41:34.823499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71794 ] 00:26:58.884 [2024-10-17 16:41:34.995481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:58.884 [2024-10-17 16:41:35.115482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.884 [2024-10-17 16:41:35.115515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.451 Running I/O for 5 seconds... 
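[Editor's note: bdev_verify drives bdevperf from the generated bdev.json with a small-block verify workload. Condensed from the invocation traced above, paths exactly as logged (the wrapper's trailing empty argument is dropped); judging by the paired Core Mask 0x1/0x2 job rows in the table below, -C together with -m 0x3 runs one job per bdev on each of the two reactor cores started above:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
]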
00:27:01.763 21728.00 IOPS, 84.88 MiB/s [2024-10-17T16:41:39.000Z] 21888.00 IOPS, 85.50 MiB/s [2024-10-17T16:41:39.936Z] 22538.67 IOPS, 88.04 MiB/s [2024-10-17T16:41:40.873Z] 22632.00 IOPS, 88.41 MiB/s [2024-10-17T16:41:40.873Z] 22796.80 IOPS, 89.05 MiB/s
00:27:04.574 Latency(us)
00:27:04.574 [2024-10-17T16:41:40.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:04.574 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x0 length 0xa0000
00:27:04.574 nvme0n1 : 5.07 1768.16 6.91 0.00 0.00 72273.96 6237.76 69905.07
00:27:04.574 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0xa0000 length 0xa0000
00:27:04.574 nvme0n1 : 5.06 1620.51 6.33 0.00 0.00 78851.89 16107.64 74958.44
00:27:04.574 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x0 length 0xbd0bd
00:27:04.574 nvme1n1 : 5.07 2739.65 10.70 0.00 0.00 46430.96 4737.54 61903.88
00:27:04.574 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:27:04.574 nvme1n1 : 5.07 2728.30 10.66 0.00 0.00 46666.94 6106.17 61482.77
00:27:04.574 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x0 length 0x80000
00:27:04.574 nvme2n1 : 5.06 1770.56 6.92 0.00 0.00 71760.31 10422.59 77485.13
00:27:04.574 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x80000 length 0x80000
00:27:04.574 nvme2n1 : 5.07 1641.29 6.41 0.00 0.00 77500.60 10896.35 69483.95
00:27:04.574 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x0 length 0x80000
00:27:04.574 nvme2n2 : 5.08 1788.54 6.99 0.00 0.00 70904.79 8422.30 77485.13
00:27:04.574 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x80000 length 0x80000
00:27:04.574 nvme2n2 : 5.06 1643.35 6.42 0.00 0.00 77268.03 9422.44 68641.72
00:27:04.574 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x0 length 0x80000
00:27:04.574 nvme2n3 : 5.07 1765.55 6.90 0.00 0.00 71708.15 11264.82 69483.95
00:27:04.574 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x80000 length 0x80000
00:27:04.574 nvme2n3 : 5.06 1642.73 6.42 0.00 0.00 77160.77 10106.76 79590.71
00:27:04.574 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x0 length 0x20000
00:27:04.574 nvme3n1 : 5.08 1765.01 6.89 0.00 0.00 71633.63 11475.38 69905.07
00:27:04.574 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:04.574 Verification LBA range: start 0x20000 length 0x20000
00:27:04.574 nvme3n1 : 5.07 1640.17 6.41 0.00 0.00 77154.33 8632.85 73273.99
[2024-10-17T16:41:40.873Z] ===================================================================================================================
[2024-10-17T16:41:40.873Z] Total : 22513.83 87.94 0.00 0.00 67712.08 4737.54 79590.71
00:27:05.952
00:27:05.952 real 0m7.270s
00:27:05.952 user 0m11.098s
00:27:05.952 sys 0m2.153s 16:41:41 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.952 16:41:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:27:05.952 ************************************ 00:27:05.952 END TEST bdev_verify 00:27:05.952 ************************************ 00:27:05.952 16:41:42 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:05.952 16:41:42 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:27:05.952 16:41:42 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.952 16:41:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:05.952 ************************************ 00:27:05.952 START TEST bdev_verify_big_io 00:27:05.952 ************************************ 00:27:05.952 16:41:42 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:05.952 [2024-10-17 16:41:42.171360] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:27:05.952 [2024-10-17 16:41:42.171498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71902 ] 00:27:06.211 [2024-10-17 16:41:42.348862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:06.211 [2024-10-17 16:41:42.494411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.211 [2024-10-17 16:41:42.494436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.148 Running I/O for 5 seconds... 
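[Editor's note: bdev_verify_big_io repeats the same bdevperf verify run with only the I/O size changed, -o 65536 in place of -o 4096. As a quick sanity check on the results below: 1917.21 total IOPS at 64 KiB per I/O works out to 1917.21 x 64 KiB, roughly 119.8 MiB/s, matching the 119.83 MiB/s in the Total row.]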
00:27:12.217 1568.00 IOPS, 98.00 MiB/s [2024-10-17T16:41:49.105Z] 3380.00 IOPS, 211.25 MiB/s [2024-10-17T16:41:49.105Z] 3577.33 IOPS, 223.58 MiB/s
00:27:12.806 Latency(us)
00:27:12.806 [2024-10-17T16:41:49.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:12.806 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x0 length 0xa000
00:27:12.806 nvme0n1 : 5.77 119.29 7.46 0.00 0.00 1034146.47 5658.73 1428421.60
00:27:12.806 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0xa000 length 0xa000
00:27:12.806 nvme0n1 : 5.78 118.97 7.44 0.00 0.00 1037030.84 128018.92 1980924.30
00:27:12.806 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x0 length 0xbd0b
00:27:12.806 nvme1n1 : 5.77 163.61 10.23 0.00 0.00 733982.46 22845.48 1259975.66
00:27:12.806 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0xbd0b length 0xbd0b
00:27:12.806 nvme1n1 : 5.81 170.81 10.68 0.00 0.00 702033.36 10264.67 1030889.18
00:27:12.806 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x0 length 0x8000
00:27:12.806 nvme2n1 : 5.77 174.63 10.91 0.00 0.00 665335.47 40005.91 1320616.20
00:27:12.806 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x8000 length 0x8000
00:27:12.806 nvme2n1 : 5.81 165.19 10.32 0.00 0.00 705349.40 117069.93 950035.12
00:27:12.806 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x0 length 0x8000
00:27:12.806 nvme2n2 : 5.79 141.00 8.81 0.00 0.00 817834.47 54323.82 1569916.20
00:27:12.806 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x8000 length 0x8000
00:27:12.806 nvme2n2 : 5.82 197.78 12.36 0.00 0.00 588287.95 29899.16 724317.56
00:27:12.806 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x0 length 0x8000
00:27:12.806 nvme2n3 : 5.80 162.85 10.18 0.00 0.00 696684.13 18318.50 1630556.74
00:27:12.806 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x8000 length 0x8000
00:27:12.806 nvme2n3 : 5.83 117.98 7.37 0.00 0.00 963935.89 31583.61 2856843.21
00:27:12.806 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x0 length 0x2000
00:27:12.806 nvme3n1 : 5.80 223.54 13.97 0.00 0.00 493307.67 7527.43 643463.51
00:27:12.806 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:12.806 Verification LBA range: start 0x2000 length 0x2000
00:27:12.806 nvme3n1 : 5.82 161.57 10.10 0.00 0.00 687781.53 12580.81 1010675.66
[2024-10-17T16:41:49.105Z] ===================================================================================================================
[2024-10-17T16:41:49.105Z] Total : 1917.21 119.83 0.00 0.00 729922.62 5658.73 2856843.21
00:27:14.709
00:27:14.709 real 0m8.536s
00:27:14.709 user 0m15.290s
00:27:14.709 sys 0m0.717s
00:27:14.709 16:41:50 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:27:14.709 16:41:50
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:27:14.709 ************************************ 00:27:14.709 END TEST bdev_verify_big_io 00:27:14.709 ************************************ 00:27:14.709 16:41:50 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:14.709 16:41:50 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:14.709 16:41:50 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:14.709 16:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:14.710 ************************************ 00:27:14.710 START TEST bdev_write_zeroes 00:27:14.710 ************************************ 00:27:14.710 16:41:50 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:14.710 [2024-10-17 16:41:50.781959] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:27:14.710 [2024-10-17 16:41:50.782096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72017 ] 00:27:14.710 [2024-10-17 16:41:50.956082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.968 [2024-10-17 16:41:51.118266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.534 Running I/O for 1 seconds... 
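[Editor's note: the write_zeroes pass swaps the workload flag and shortens the run; per the startup lines traced above it also runs on a single core rather than two. Condensed from the invocation as logged, path and flags exactly as in the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1
]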
00:27:16.469 58390.00 IOPS, 228.09 MiB/s
00:27:16.469 Latency(us)
00:27:16.469 [2024-10-17T16:41:52.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:16.469 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:16.469 nvme0n1 : 1.03 8437.43 32.96 0.00 0.00 15156.14 8790.77 42111.49
00:27:16.469 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:16.469 nvme1n1 : 1.04 15104.94 59.00 0.00 0.00 8423.86 3947.95 38532.01
00:27:16.469 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:16.469 nvme2n1 : 1.04 8395.91 32.80 0.00 0.00 15143.39 7895.90 38321.45
00:27:16.469 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:16.469 nvme2n2 : 1.04 8384.09 32.75 0.00 0.00 15150.78 7843.26 37900.34
00:27:16.469 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:16.469 nvme2n3 : 1.04 8373.38 32.71 0.00 0.00 15157.79 7843.26 37058.11
00:27:16.469 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:27:16.469 nvme3n1 : 1.04 8363.71 32.67 0.00 0.00 15165.03 7685.35 36215.88
[2024-10-17T16:41:52.768Z] ===================================================================================================================
[2024-10-17T16:41:52.768Z] Total : 57059.45 222.89 0.00 0.00 13367.37 3947.95 42111.49
00:27:17.844
00:27:17.844 real 0m3.332s
00:27:17.844 user 0m2.461s
00:27:17.844 sys 0m0.702s
00:27:17.844 16:41:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable ************************************ 00:27:17.844 END TEST bdev_write_zeroes ************************************ 00:27:17.844 16:41:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:27:17.844 16:41:54 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:17.844 16:41:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:17.844 16:41:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:17.844 16:41:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:17.844 ************************************ 00:27:17.844 START TEST bdev_json_nonenclosed 00:27:17.844 ************************************ 00:27:17.844 16:41:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:18.102 [2024-10-17 16:41:54.183187] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization...
00:27:18.102 [2024-10-17 16:41:54.183335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72076 ] 00:27:18.102 [2024-10-17 16:41:54.356350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.362 [2024-10-17 16:41:54.482308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.362 [2024-10-17 16:41:54.482416] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:27:18.362 [2024-10-17 16:41:54.482437] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:18.362 [2024-10-17 16:41:54.482451] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:18.621 00:27:18.621 real 0m0.664s 00:27:18.621 user 0m0.416s 00:27:18.621 sys 0m0.143s 00:27:18.621 16:41:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:18.621 16:41:54 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:27:18.621 ************************************ 00:27:18.621 END TEST bdev_json_nonenclosed 00:27:18.621 ************************************ 00:27:18.621 16:41:54 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:18.621 16:41:54 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:27:18.621 16:41:54 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:18.621 16:41:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:18.621 ************************************ 00:27:18.621 START TEST bdev_json_nonarray 00:27:18.621 ************************************ 00:27:18.621 16:41:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:18.880 [2024-10-17 16:41:54.924654] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:27:18.880 [2024-10-17 16:41:54.924815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72102 ] 00:27:18.880 [2024-10-17 16:41:55.099131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.138 [2024-10-17 16:41:55.224621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.138 [2024-10-17 16:41:55.224747] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
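[Editor's note: bdev_json_nonenclosed and bdev_json_nonarray are negative tests: the run passes only because bdevperf rejects the config, so the *ERROR* lines above and the non-zero spdk_app_stop below are the expected outcome. The fixture files themselves never appear in this log; hypothetical inputs of the shape the two error messages imply:

    # nonenclosed.json -- a valid JSON fragment that is not enclosed in {} (hypothetical):
    "subsystems": []

    # nonarray.json -- enclosed, but "subsystems" is not an array (hypothetical):
    { "subsystems": {} }
]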
00:27:19.138 [2024-10-17 16:41:55.224788] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:27:19.138 [2024-10-17 16:41:55.224803] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:19.444 00:27:19.444 real 0m0.672s 00:27:19.444 user 0m0.422s 00:27:19.444 sys 0m0.145s 00:27:19.444 16:41:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.444 16:41:55 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:27:19.444 ************************************ 00:27:19.444 END TEST bdev_json_nonarray 00:27:19.444 ************************************ 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:27:19.444 16:41:55 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:20.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:24.215 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:24.473 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:27:24.473 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:24.473 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:27:24.732 00:27:24.732 real 1m6.975s 00:27:24.732 user 1m40.132s 00:27:24.732 sys 0m36.484s 00:27:24.732 16:42:00 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.732 ************************************ 00:27:24.732 END TEST blockdev_xnvme 00:27:24.732 ************************************ 00:27:24.732 16:42:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:27:24.732 16:42:00 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:27:24.732 16:42:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:24.732 16:42:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:24.732 16:42:00 -- common/autotest_common.sh@10 -- # set +x 00:27:24.732 ************************************ 00:27:24.732 START TEST ublk 00:27:24.732 ************************************ 00:27:24.732 16:42:00 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:27:24.732 * Looking for test storage... 
00:27:24.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:27:24.732 16:42:01 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:24.991 16:42:01 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:24.991 16:42:01 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:27:24.991 16:42:01 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:27:24.991 16:42:01 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:27:24.991 16:42:01 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:27:24.991 16:42:01 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:27:24.991 16:42:01 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:24.991 16:42:01 ublk -- scripts/common.sh@344 -- # case "$op" in 00:27:24.991 16:42:01 ublk -- scripts/common.sh@345 -- # : 1 00:27:24.991 16:42:01 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:24.991 16:42:01 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:24.991 16:42:01 ublk -- scripts/common.sh@365 -- # decimal 1 00:27:24.991 16:42:01 ublk -- scripts/common.sh@353 -- # local d=1 00:27:24.991 16:42:01 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:24.991 16:42:01 ublk -- scripts/common.sh@355 -- # echo 1 00:27:24.991 16:42:01 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:27:24.991 16:42:01 ublk -- scripts/common.sh@366 -- # decimal 2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@353 -- # local d=2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:24.991 16:42:01 ublk -- scripts/common.sh@355 -- # echo 2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:27:24.991 16:42:01 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:24.991 16:42:01 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:24.991 16:42:01 ublk -- scripts/common.sh@368 -- # return 0 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.991 --rc genhtml_branch_coverage=1 00:27:24.991 --rc genhtml_function_coverage=1 00:27:24.991 --rc genhtml_legend=1 00:27:24.991 --rc geninfo_all_blocks=1 00:27:24.991 --rc geninfo_unexecuted_blocks=1 00:27:24.991 00:27:24.991 ' 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.991 --rc genhtml_branch_coverage=1 00:27:24.991 --rc genhtml_function_coverage=1 00:27:24.991 --rc genhtml_legend=1 00:27:24.991 --rc geninfo_all_blocks=1 00:27:24.991 --rc geninfo_unexecuted_blocks=1 00:27:24.991 00:27:24.991 ' 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.991 --rc genhtml_branch_coverage=1 00:27:24.991 --rc 
genhtml_function_coverage=1 00:27:24.991 --rc genhtml_legend=1 00:27:24.991 --rc geninfo_all_blocks=1 00:27:24.991 --rc geninfo_unexecuted_blocks=1 00:27:24.991 00:27:24.991 ' 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:24.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:24.991 --rc genhtml_branch_coverage=1 00:27:24.991 --rc genhtml_function_coverage=1 00:27:24.991 --rc genhtml_legend=1 00:27:24.991 --rc geninfo_all_blocks=1 00:27:24.991 --rc geninfo_unexecuted_blocks=1 00:27:24.991 00:27:24.991 ' 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:27:24.991 16:42:01 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:27:24.991 16:42:01 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:27:24.991 16:42:01 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:27:24.991 16:42:01 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:27:24.991 16:42:01 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:27:24.991 16:42:01 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:27:24.991 16:42:01 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:27:24.991 16:42:01 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:27:24.991 16:42:01 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:24.991 16:42:01 ublk -- common/autotest_common.sh@10 -- # set +x 00:27:24.991 ************************************ 00:27:24.991 START TEST test_save_ublk_config 00:27:24.991 ************************************ 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72413 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72413 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72413 ']' 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
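[Editor's note: test_save_ublk_config, which starts below, boots spdk_tgt with -L ublk, builds a single ublk disk on a malloc bdev, and snapshots the running configuration. Reconstructed as rpc.py calls from the rpc_cmd traces that follow; only the bdev name, the ublk id 0, num_queues 1, and queue_depth 128 are visible in the log, so the malloc sizing, any queue flags, and the output path are assumptions:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 128 4096    # total size / block size assumed
    "$rpc" ublk_start_disk malloc0 0                 # trace shows num_queues 1, queue_depth 128
    "$rpc" save_config > /tmp/ublk_config.json       # output path hypothetical
]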
00:27:24.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:24.991 16:42:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:25.250 [2024-10-17 16:42:01.301504] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:27:25.250 [2024-10-17 16:42:01.301647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72413 ] 00:27:25.250 [2024-10-17 16:42:01.478415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.509 [2024-10-17 16:42:01.605121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:26.444 [2024-10-17 16:42:02.562735] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:27:26.444 [2024-10-17 16:42:02.564116] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:27:26.444 malloc0 00:27:26.444 [2024-10-17 16:42:02.650909] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:27:26.444 [2024-10-17 16:42:02.651031] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:27:26.444 [2024-10-17 16:42:02.651046] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:27:26.444 [2024-10-17 16:42:02.651056] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:27:26.444 [2024-10-17 16:42:02.659845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:26.444 [2024-10-17 16:42:02.659880] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:26.444 [2024-10-17 16:42:02.666738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:26.444 [2024-10-17 16:42:02.666856] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:27:26.444 [2024-10-17 16:42:02.683750] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:27:26.444 0 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.444 16:42:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:27.049 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.049 16:42:03 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:27:27.049 "subsystems": [ 00:27:27.049 { 00:27:27.049 "subsystem": "fsdev", 00:27:27.049 
"config": [ 00:27:27.049 { 00:27:27.049 "method": "fsdev_set_opts", 00:27:27.049 "params": { 00:27:27.049 "fsdev_io_pool_size": 65535, 00:27:27.049 "fsdev_io_cache_size": 256 00:27:27.049 } 00:27:27.049 } 00:27:27.049 ] 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "subsystem": "keyring", 00:27:27.049 "config": [] 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "subsystem": "iobuf", 00:27:27.049 "config": [ 00:27:27.049 { 00:27:27.049 "method": "iobuf_set_options", 00:27:27.049 "params": { 00:27:27.049 "small_pool_count": 8192, 00:27:27.049 "large_pool_count": 1024, 00:27:27.049 "small_bufsize": 8192, 00:27:27.049 "large_bufsize": 135168 00:27:27.049 } 00:27:27.049 } 00:27:27.049 ] 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "subsystem": "sock", 00:27:27.049 "config": [ 00:27:27.049 { 00:27:27.049 "method": "sock_set_default_impl", 00:27:27.049 "params": { 00:27:27.049 "impl_name": "posix" 00:27:27.049 } 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "method": "sock_impl_set_options", 00:27:27.049 "params": { 00:27:27.049 "impl_name": "ssl", 00:27:27.049 "recv_buf_size": 4096, 00:27:27.049 "send_buf_size": 4096, 00:27:27.049 "enable_recv_pipe": true, 00:27:27.049 "enable_quickack": false, 00:27:27.049 "enable_placement_id": 0, 00:27:27.049 "enable_zerocopy_send_server": true, 00:27:27.049 "enable_zerocopy_send_client": false, 00:27:27.049 "zerocopy_threshold": 0, 00:27:27.049 "tls_version": 0, 00:27:27.049 "enable_ktls": false 00:27:27.049 } 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "method": "sock_impl_set_options", 00:27:27.049 "params": { 00:27:27.049 "impl_name": "posix", 00:27:27.049 "recv_buf_size": 2097152, 00:27:27.049 "send_buf_size": 2097152, 00:27:27.049 "enable_recv_pipe": true, 00:27:27.049 "enable_quickack": false, 00:27:27.049 "enable_placement_id": 0, 00:27:27.049 "enable_zerocopy_send_server": true, 00:27:27.049 "enable_zerocopy_send_client": false, 00:27:27.049 "zerocopy_threshold": 0, 00:27:27.049 "tls_version": 0, 00:27:27.049 "enable_ktls": false 00:27:27.049 } 00:27:27.049 } 00:27:27.049 ] 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "subsystem": "vmd", 00:27:27.049 "config": [] 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "subsystem": "accel", 00:27:27.049 "config": [ 00:27:27.049 { 00:27:27.049 "method": "accel_set_options", 00:27:27.049 "params": { 00:27:27.049 "small_cache_size": 128, 00:27:27.049 "large_cache_size": 16, 00:27:27.049 "task_count": 2048, 00:27:27.049 "sequence_count": 2048, 00:27:27.049 "buf_count": 2048 00:27:27.049 } 00:27:27.049 } 00:27:27.049 ] 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "subsystem": "bdev", 00:27:27.049 "config": [ 00:27:27.049 { 00:27:27.049 "method": "bdev_set_options", 00:27:27.049 "params": { 00:27:27.049 "bdev_io_pool_size": 65535, 00:27:27.049 "bdev_io_cache_size": 256, 00:27:27.049 "bdev_auto_examine": true, 00:27:27.049 "iobuf_small_cache_size": 128, 00:27:27.049 "iobuf_large_cache_size": 16 00:27:27.049 } 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "method": "bdev_raid_set_options", 00:27:27.049 "params": { 00:27:27.049 "process_window_size_kb": 1024, 00:27:27.049 "process_max_bandwidth_mb_sec": 0 00:27:27.049 } 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "method": "bdev_iscsi_set_options", 00:27:27.049 "params": { 00:27:27.049 "timeout_sec": 30 00:27:27.049 } 00:27:27.049 }, 00:27:27.049 { 00:27:27.049 "method": "bdev_nvme_set_options", 00:27:27.049 "params": { 00:27:27.049 "action_on_timeout": "none", 00:27:27.049 "timeout_us": 0, 00:27:27.049 "timeout_admin_us": 0, 00:27:27.049 "keep_alive_timeout_ms": 10000, 00:27:27.049 
"arbitration_burst": 0, 00:27:27.049 "low_priority_weight": 0, 00:27:27.049 "medium_priority_weight": 0, 00:27:27.049 "high_priority_weight": 0, 00:27:27.049 "nvme_adminq_poll_period_us": 10000, 00:27:27.050 "nvme_ioq_poll_period_us": 0, 00:27:27.050 "io_queue_requests": 0, 00:27:27.050 "delay_cmd_submit": true, 00:27:27.050 "transport_retry_count": 4, 00:27:27.050 "bdev_retry_count": 3, 00:27:27.050 "transport_ack_timeout": 0, 00:27:27.050 "ctrlr_loss_timeout_sec": 0, 00:27:27.050 "reconnect_delay_sec": 0, 00:27:27.050 "fast_io_fail_timeout_sec": 0, 00:27:27.050 "disable_auto_failback": false, 00:27:27.050 "generate_uuids": false, 00:27:27.050 "transport_tos": 0, 00:27:27.050 "nvme_error_stat": false, 00:27:27.050 "rdma_srq_size": 0, 00:27:27.050 "io_path_stat": false, 00:27:27.050 "allow_accel_sequence": false, 00:27:27.050 "rdma_max_cq_size": 0, 00:27:27.050 "rdma_cm_event_timeout_ms": 0, 00:27:27.050 "dhchap_digests": [ 00:27:27.050 "sha256", 00:27:27.050 "sha384", 00:27:27.050 "sha512" 00:27:27.050 ], 00:27:27.050 "dhchap_dhgroups": [ 00:27:27.050 "null", 00:27:27.050 "ffdhe2048", 00:27:27.050 "ffdhe3072", 00:27:27.050 "ffdhe4096", 00:27:27.050 "ffdhe6144", 00:27:27.050 "ffdhe8192" 00:27:27.050 ] 00:27:27.050 } 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "method": "bdev_nvme_set_hotplug", 00:27:27.050 "params": { 00:27:27.050 "period_us": 100000, 00:27:27.050 "enable": false 00:27:27.050 } 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "method": "bdev_malloc_create", 00:27:27.050 "params": { 00:27:27.050 "name": "malloc0", 00:27:27.050 "num_blocks": 8192, 00:27:27.050 "block_size": 4096, 00:27:27.050 "physical_block_size": 4096, 00:27:27.050 "uuid": "1c7f6c6d-8716-4324-9534-428097f0f7fd", 00:27:27.050 "optimal_io_boundary": 0, 00:27:27.050 "md_size": 0, 00:27:27.050 "dif_type": 0, 00:27:27.050 "dif_is_head_of_md": false, 00:27:27.050 "dif_pi_format": 0 00:27:27.050 } 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "method": "bdev_wait_for_examine" 00:27:27.050 } 00:27:27.050 ] 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "scsi", 00:27:27.050 "config": null 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "scheduler", 00:27:27.050 "config": [ 00:27:27.050 { 00:27:27.050 "method": "framework_set_scheduler", 00:27:27.050 "params": { 00:27:27.050 "name": "static" 00:27:27.050 } 00:27:27.050 } 00:27:27.050 ] 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "vhost_scsi", 00:27:27.050 "config": [] 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "vhost_blk", 00:27:27.050 "config": [] 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "ublk", 00:27:27.050 "config": [ 00:27:27.050 { 00:27:27.050 "method": "ublk_create_target", 00:27:27.050 "params": { 00:27:27.050 "cpumask": "1" 00:27:27.050 } 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "method": "ublk_start_disk", 00:27:27.050 "params": { 00:27:27.050 "bdev_name": "malloc0", 00:27:27.050 "ublk_id": 0, 00:27:27.050 "num_queues": 1, 00:27:27.050 "queue_depth": 128 00:27:27.050 } 00:27:27.050 } 00:27:27.050 ] 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "nbd", 00:27:27.050 "config": [] 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "nvmf", 00:27:27.050 "config": [ 00:27:27.050 { 00:27:27.050 "method": "nvmf_set_config", 00:27:27.050 "params": { 00:27:27.050 "discovery_filter": "match_any", 00:27:27.050 "admin_cmd_passthru": { 00:27:27.050 "identify_ctrlr": false 00:27:27.050 }, 00:27:27.050 "dhchap_digests": [ 00:27:27.050 "sha256", 00:27:27.050 "sha384", 00:27:27.050 "sha512" 00:27:27.050 
], 00:27:27.050 "dhchap_dhgroups": [ 00:27:27.050 "null", 00:27:27.050 "ffdhe2048", 00:27:27.050 "ffdhe3072", 00:27:27.050 "ffdhe4096", 00:27:27.050 "ffdhe6144", 00:27:27.050 "ffdhe8192" 00:27:27.050 ] 00:27:27.050 } 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "method": "nvmf_set_max_subsystems", 00:27:27.050 "params": { 00:27:27.050 "max_subsystems": 1024 00:27:27.050 } 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "method": "nvmf_set_crdt", 00:27:27.050 "params": { 00:27:27.050 "crdt1": 0, 00:27:27.050 "crdt2": 0, 00:27:27.050 "crdt3": 0 00:27:27.050 } 00:27:27.050 } 00:27:27.050 ] 00:27:27.050 }, 00:27:27.050 { 00:27:27.050 "subsystem": "iscsi", 00:27:27.050 "config": [ 00:27:27.050 { 00:27:27.050 "method": "iscsi_set_options", 00:27:27.050 "params": { 00:27:27.050 "node_base": "iqn.2016-06.io.spdk", 00:27:27.050 "max_sessions": 128, 00:27:27.050 "max_connections_per_session": 2, 00:27:27.050 "max_queue_depth": 64, 00:27:27.050 "default_time2wait": 2, 00:27:27.050 "default_time2retain": 20, 00:27:27.050 "first_burst_length": 8192, 00:27:27.050 "immediate_data": true, 00:27:27.050 "allow_duplicated_isid": false, 00:27:27.050 "error_recovery_level": 0, 00:27:27.050 "nop_timeout": 60, 00:27:27.050 "nop_in_interval": 30, 00:27:27.050 "disable_chap": false, 00:27:27.050 "require_chap": false, 00:27:27.050 "mutual_chap": false, 00:27:27.050 "chap_group": 0, 00:27:27.050 "max_large_datain_per_connection": 64, 00:27:27.050 "max_r2t_per_connection": 4, 00:27:27.050 "pdu_pool_size": 36864, 00:27:27.050 "immediate_data_pool_size": 16384, 00:27:27.050 "data_out_pool_size": 2048 00:27:27.050 } 00:27:27.050 } 00:27:27.050 ] 00:27:27.050 } 00:27:27.050 ] 00:27:27.050 }' 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72413 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72413 ']' 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72413 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72413 00:27:27.050 killing process with pid 72413 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72413' 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72413 00:27:27.050 16:42:03 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72413 00:27:28.427 [2024-10-17 16:42:04.555602] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:27:28.427 [2024-10-17 16:42:04.590800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:28.427 [2024-10-17 16:42:04.590954] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:27:28.427 [2024-10-17 16:42:04.597740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:28.427 [2024-10-17 16:42:04.597818] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:27:28.427 [2024-10-17 16:42:04.597837] 
ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:27:28.427 [2024-10-17 16:42:04.597861] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:27:28.427 [2024-10-17 16:42:04.598018] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:27:30.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72481 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72481 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 72481 ']' 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:30.330 16:42:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:27:30.330 "subsystems": [ 00:27:30.330 { 00:27:30.330 "subsystem": "fsdev", 00:27:30.330 "config": [ 00:27:30.330 { 00:27:30.330 "method": "fsdev_set_opts", 00:27:30.330 "params": { 00:27:30.330 "fsdev_io_pool_size": 65535, 00:27:30.330 "fsdev_io_cache_size": 256 00:27:30.330 } 00:27:30.330 } 00:27:30.330 ] 00:27:30.330 }, 00:27:30.330 { 00:27:30.330 "subsystem": "keyring", 00:27:30.330 "config": [] 00:27:30.330 }, 00:27:30.330 { 00:27:30.330 "subsystem": "iobuf", 00:27:30.330 "config": [ 00:27:30.330 { 00:27:30.330 "method": "iobuf_set_options", 00:27:30.330 "params": { 00:27:30.330 "small_pool_count": 8192, 00:27:30.330 "large_pool_count": 1024, 00:27:30.330 "small_bufsize": 8192, 00:27:30.330 "large_bufsize": 135168 00:27:30.330 } 00:27:30.330 } 00:27:30.330 ] 00:27:30.330 }, 00:27:30.330 { 00:27:30.330 "subsystem": "sock", 00:27:30.330 "config": [ 00:27:30.330 { 00:27:30.330 "method": "sock_set_default_impl", 00:27:30.330 "params": { 00:27:30.330 "impl_name": "posix" 00:27:30.330 } 00:27:30.330 }, 00:27:30.330 { 00:27:30.330 "method": "sock_impl_set_options", 00:27:30.330 "params": { 00:27:30.330 "impl_name": "ssl", 00:27:30.330 "recv_buf_size": 4096, 00:27:30.330 "send_buf_size": 4096, 00:27:30.330 "enable_recv_pipe": true, 00:27:30.330 "enable_quickack": false, 00:27:30.330 "enable_placement_id": 0, 00:27:30.330 "enable_zerocopy_send_server": true, 00:27:30.330 "enable_zerocopy_send_client": false, 00:27:30.330 "zerocopy_threshold": 0, 00:27:30.330 "tls_version": 0, 00:27:30.330 "enable_ktls": false 00:27:30.330 } 00:27:30.330 }, 00:27:30.330 { 00:27:30.330 "method": "sock_impl_set_options", 00:27:30.330 "params": { 00:27:30.330 "impl_name": "posix", 00:27:30.330 "recv_buf_size": 2097152, 00:27:30.330 "send_buf_size": 2097152, 00:27:30.330 "enable_recv_pipe": true, 00:27:30.330 "enable_quickack": false, 00:27:30.330 "enable_placement_id": 0, 00:27:30.330 "enable_zerocopy_send_server": true, 00:27:30.330 "enable_zerocopy_send_client": false, 00:27:30.330 "zerocopy_threshold": 0, 00:27:30.330 "tls_version": 0, 00:27:30.330 "enable_ktls": false 00:27:30.330 } 00:27:30.330 } 00:27:30.330 ] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "vmd", 00:27:30.331 "config": [] 00:27:30.331 }, 00:27:30.331 { 
00:27:30.331 "subsystem": "accel", 00:27:30.331 "config": [ 00:27:30.331 { 00:27:30.331 "method": "accel_set_options", 00:27:30.331 "params": { 00:27:30.331 "small_cache_size": 128, 00:27:30.331 "large_cache_size": 16, 00:27:30.331 "task_count": 2048, 00:27:30.331 "sequence_count": 2048, 00:27:30.331 "buf_count": 2048 00:27:30.331 } 00:27:30.331 } 00:27:30.331 ] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "bdev", 00:27:30.331 "config": [ 00:27:30.331 { 00:27:30.331 "method": "bdev_set_options", 00:27:30.331 "params": { 00:27:30.331 "bdev_io_pool_size": 65535, 00:27:30.331 "bdev_io_cache_size": 256, 00:27:30.331 "bdev_auto_examine": true, 00:27:30.331 "iobuf_small_cache_size": 128, 00:27:30.331 "iobuf_large_cache_size": 16 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "bdev_raid_set_options", 00:27:30.331 "params": { 00:27:30.331 "process_window_size_kb": 1024, 00:27:30.331 "process_max_bandwidth_mb_sec": 0 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "bdev_iscsi_set_options", 00:27:30.331 "params": { 00:27:30.331 "timeout_sec": 30 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "bdev_nvme_set_options", 00:27:30.331 "params": { 00:27:30.331 "action_on_timeout": "none", 00:27:30.331 "timeout_us": 0, 00:27:30.331 "timeout_admin_us": 0, 00:27:30.331 "keep_alive_timeout_ms": 10000, 00:27:30.331 "arbitration_burst": 0, 00:27:30.331 "low_priority_weight": 0, 00:27:30.331 "medium_priority_weight": 0, 00:27:30.331 "high_priority_weight": 0, 00:27:30.331 "nvme_adminq_poll_period_us": 10000, 00:27:30.331 "nvme_ioq_poll_period_us": 0, 00:27:30.331 "io_queue_requests": 0, 00:27:30.331 "delay_cmd_submit": true, 00:27:30.331 "transport_retry_count": 4, 00:27:30.331 "bdev_retry_count": 3, 00:27:30.331 "transport_ack_timeout": 0, 00:27:30.331 "ctrlr_loss_timeout_sec": 0, 00:27:30.331 "reconnect_delay_sec": 0, 00:27:30.331 "fast_io_fail_timeout_sec": 0, 00:27:30.331 "disable_auto_failback": false, 00:27:30.331 "generate_uuids": false, 00:27:30.331 "transport_tos": 0, 00:27:30.331 "nvme_error_stat": false, 00:27:30.331 "rdma_srq_size": 0, 00:27:30.331 "io_path_stat": false, 00:27:30.331 "allow_accel_sequence": false, 00:27:30.331 "rdma_max_cq_size": 0, 00:27:30.331 "rdma_cm_event_timeout_ms": 0, 00:27:30.331 "dhchap_digests": [ 00:27:30.331 "sha256", 00:27:30.331 "sha384", 00:27:30.331 "sha512" 00:27:30.331 ], 00:27:30.331 "dhchap_dhgroups": [ 00:27:30.331 "null", 00:27:30.331 "ffdhe2048", 00:27:30.331 "ffdhe3072", 00:27:30.331 "ffdhe4096", 00:27:30.331 "ffdhe6144", 00:27:30.331 "ffdhe8192" 00:27:30.331 ] 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "bdev_nvme_set_hotplug", 00:27:30.331 "params": { 00:27:30.331 "period_us": 100000, 00:27:30.331 "enable": false 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "bdev_malloc_create", 00:27:30.331 "params": { 00:27:30.331 "name": "malloc0", 00:27:30.331 "num_blocks": 8192, 00:27:30.331 "block_size": 4096, 00:27:30.331 "physical_block_size": 4096, 00:27:30.331 "uuid": "1c7f6c6d-8716-4324-9534-428097f0f7fd", 00:27:30.331 "optimal_io_boundary": 0, 00:27:30.331 "md_size": 0, 00:27:30.331 "dif_type": 0, 00:27:30.331 "dif_is_head_of_md": false, 00:27:30.331 "dif_pi_format": 0 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "bdev_wait_for_examine" 00:27:30.331 } 00:27:30.331 ] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "scsi", 00:27:30.331 "config": null 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": 
"scheduler", 00:27:30.331 "config": [ 00:27:30.331 { 00:27:30.331 "method": "framework_set_scheduler", 00:27:30.331 "params": { 00:27:30.331 "name": "static" 00:27:30.331 } 00:27:30.331 } 00:27:30.331 ] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "vhost_scsi", 00:27:30.331 "config": [] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "vhost_blk", 00:27:30.331 "config": [] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "ublk", 00:27:30.331 "config": [ 00:27:30.331 { 00:27:30.331 "method": "ublk_create_target", 00:27:30.331 "params": { 00:27:30.331 "cpumask": "1" 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "ublk_start_disk", 00:27:30.331 "params": { 00:27:30.331 "bdev_name": "malloc0", 00:27:30.331 "ublk_id": 0, 00:27:30.331 "num_queues": 1, 00:27:30.331 "queue_depth": 128 00:27:30.331 } 00:27:30.331 } 00:27:30.331 ] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "nbd", 00:27:30.331 "config": [] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "nvmf", 00:27:30.331 "config": [ 00:27:30.331 { 00:27:30.331 "method": "nvmf_set_config", 00:27:30.331 "params": { 00:27:30.331 "discovery_filter": "match_any", 00:27:30.331 "admin_cmd_passthru": { 00:27:30.331 "identify_ctrlr": false 00:27:30.331 }, 00:27:30.331 "dhchap_digests": [ 00:27:30.331 "sha256", 00:27:30.331 "sha384", 00:27:30.331 "sha512" 00:27:30.331 ], 00:27:30.331 "dhchap_dhgroups": [ 00:27:30.331 "null", 00:27:30.331 "ffdhe2048", 00:27:30.331 "ffdhe3072", 00:27:30.331 "ffdhe4096", 00:27:30.331 "ffdhe6144", 00:27:30.331 "ffdhe8192" 00:27:30.331 ] 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "nvmf_set_max_subsystems", 00:27:30.331 "params": { 00:27:30.331 "max_subsystems": 1024 00:27:30.331 } 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "method": "nvmf_set_crdt", 00:27:30.331 "params": { 00:27:30.331 "crdt1": 0, 00:27:30.331 "crdt2": 0, 00:27:30.331 "crdt3": 0 00:27:30.331 } 00:27:30.331 } 00:27:30.331 ] 00:27:30.331 }, 00:27:30.331 { 00:27:30.331 "subsystem": "iscsi", 00:27:30.331 "config": [ 00:27:30.331 { 00:27:30.331 "method": "iscsi_set_options", 00:27:30.331 "params": { 00:27:30.331 "node_base": "iqn.2016-06.io.spdk", 00:27:30.331 "max_sessions": 128, 00:27:30.331 "max_connections_per_session": 2, 00:27:30.331 "max_queue_depth": 64, 00:27:30.331 "default_time2wait": 2, 00:27:30.331 "default_time2retain": 20, 00:27:30.331 "first_burst_length": 8192, 00:27:30.331 "immediate_data": true, 00:27:30.331 "allow_duplicated_isid": false, 00:27:30.331 "error_recovery_level": 0, 00:27:30.331 "nop_timeout": 60, 00:27:30.331 "nop_in_interval": 30, 00:27:30.331 "disable_chap": false, 00:27:30.331 "require_chap": false, 00:27:30.331 "mutual_chap": false, 00:27:30.331 "chap_group": 0, 00:27:30.331 "max_large_datain_per_connection": 64, 00:27:30.331 "max_r2t_per_connection": 4, 00:27:30.331 "pdu_pool_size": 36864, 00:27:30.331 "immediate_data_pool_size": 16384, 00:27:30.331 "data_out_pool_size": 2048 00:27:30.331 } 00:27:30.331 } 00:27:30.331 ] 00:27:30.331 } 00:27:30.331 ] 00:27:30.331 }' 00:27:30.331 16:42:06 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:27:30.590 [2024-10-17 16:42:06.665929] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:27:30.590 [2024-10-17 16:42:06.666306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72481 ] 00:27:30.590 [2024-10-17 16:42:06.826766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.849 [2024-10-17 16:42:06.991587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.255 [2024-10-17 16:42:08.126719] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:27:32.255 [2024-10-17 16:42:08.128090] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:27:32.255 [2024-10-17 16:42:08.134865] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:27:32.255 [2024-10-17 16:42:08.134991] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:27:32.255 [2024-10-17 16:42:08.135006] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:27:32.255 [2024-10-17 16:42:08.135016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:27:32.255 [2024-10-17 16:42:08.143909] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:32.255 [2024-10-17 16:42:08.143948] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:32.255 [2024-10-17 16:42:08.150751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:32.255 [2024-10-17 16:42:08.150867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:27:32.255 [2024-10-17 16:42:08.167728] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72481 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 72481 ']' 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 72481 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72481 00:27:32.255 killing process with pid 72481 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:32.255 
16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72481' 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 72481 00:27:32.255 16:42:08 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 72481 00:27:33.629 [2024-10-17 16:42:09.879618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:27:33.629 [2024-10-17 16:42:09.916736] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:33.629 [2024-10-17 16:42:09.916889] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:27:33.887 [2024-10-17 16:42:09.931719] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:33.887 [2024-10-17 16:42:09.931790] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:27:33.887 [2024-10-17 16:42:09.931800] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:27:33.887 [2024-10-17 16:42:09.931828] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:27:33.887 [2024-10-17 16:42:09.931979] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:27:35.788 16:42:11 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:27:35.788 00:27:35.788 real 0m10.608s 00:27:35.788 user 0m8.480s 00:27:35.788 sys 0m3.144s 00:27:35.788 ************************************ 00:27:35.788 END TEST test_save_ublk_config 00:27:35.788 ************************************ 00:27:35.788 16:42:11 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.788 16:42:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:27:35.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.788 16:42:11 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72575 00:27:35.788 16:42:11 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:35.788 16:42:11 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72575 00:27:35.788 16:42:11 ublk -- common/autotest_common.sh@831 -- # '[' -z 72575 ']' 00:27:35.788 16:42:11 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.788 16:42:11 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:35.788 16:42:11 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.788 16:42:11 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.788 16:42:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:27:35.788 16:42:11 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:27:35.788 [2024-10-17 16:42:11.954766] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
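The shutdowns above and below run through the suite's `killprocess` helper; its checks (`kill -0`, `uname`, `ps --no-headers -o comm=`, the `reactor_0` name test) are all visible in the xtrace. A sketch reconstructed from that trace:

    # killprocess as traced: confirm the pid is alive and looks like an
    # SPDK reactor, then kill it and reap it.
    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                            # fails if already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        # a sudo-wrapped process would need its children signalled
        # instead; that branch is not exercised in this log
        kill "$pid"
        wait "$pid"                               # reap, propagate status
    }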
00:27:35.788 [2024-10-17 16:42:11.954899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72575 ] 00:27:36.046 [2024-10-17 16:42:12.116917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:36.046 [2024-10-17 16:42:12.239015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.046 [2024-10-17 16:42:12.239023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.982 16:42:13 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:36.982 16:42:13 ublk -- common/autotest_common.sh@864 -- # return 0 00:27:36.982 16:42:13 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:27:36.982 16:42:13 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:36.982 16:42:13 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:36.982 16:42:13 ublk -- common/autotest_common.sh@10 -- # set +x 00:27:36.982 ************************************ 00:27:36.982 START TEST test_create_ublk 00:27:36.982 ************************************ 00:27:36.982 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:27:36.982 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:27:36.982 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.982 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:36.982 [2024-10-17 16:42:13.162720] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:27:36.982 [2024-10-17 16:42:13.165537] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:27:36.982 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.982 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:27:36.982 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:27:36.982 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.982 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:37.242 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.242 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:27:37.242 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:27:37.242 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.242 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:37.242 [2024-10-17 16:42:13.457904] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:27:37.242 [2024-10-17 16:42:13.458377] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:27:37.242 [2024-10-17 16:42:13.458399] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:27:37.242 [2024-10-17 16:42:13.458408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:27:37.242 [2024-10-17 16:42:13.467041] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:37.242 [2024-10-17 16:42:13.467070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:37.242 
[2024-10-17 16:42:13.473742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:37.242 [2024-10-17 16:42:13.491787] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:27:37.242 [2024-10-17 16:42:13.504845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:27:37.242 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.242 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:27:37.242 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:27:37.242 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:27:37.242 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.242 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:37.242 16:42:13 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.242 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:27:37.242 { 00:27:37.242 "ublk_device": "/dev/ublkb0", 00:27:37.242 "id": 0, 00:27:37.242 "queue_depth": 512, 00:27:37.242 "num_queues": 4, 00:27:37.242 "bdev_name": "Malloc0" 00:27:37.242 } 00:27:37.242 ]' 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:27:37.501 16:42:13 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
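The `run_fio_test` wrapper from test/lvol/common.sh, whose expansion is traced just above, assembles one fio invocation from its arguments; when a pattern such as 0xcc is given it adds the verify flags so every written block is read back and checked. A compact sketch of the wrapper as traced:

    # run_fio_test as traced above: pattern writes with verification.
    run_fio_test() {
        local file=$1 offset=$2 size=$3 rw=$4 pattern=$5 extra_params=$6
        local pattern_template=
        [[ -n $pattern ]] && pattern_template="--do_verify=1 --verify=pattern --verify_pattern=$pattern --verify_state_save=0"
        fio --name=fio_test --filename="$file" --offset="$offset" \
            --size="$size" --rw="$rw" --direct=1 $extra_params $pattern_template
    }
    # e.g. run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10'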
00:27:37.501 16:42:13 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:27:37.760 fio: verification read phase will never start because write phase uses all of runtime 00:27:37.760 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:27:37.760 fio-3.35 00:27:37.760 Starting 1 process 00:27:47.764 00:27:47.764 fio_test: (groupid=0, jobs=1): err= 0: pid=72627: Thu Oct 17 16:42:23 2024 00:27:47.765 write: IOPS=15.8k, BW=61.5MiB/s (64.5MB/s)(615MiB/10001msec); 0 zone resets 00:27:47.765 clat (usec): min=38, max=4026, avg=62.61, stdev=98.81 00:27:47.765 lat (usec): min=38, max=4026, avg=63.10, stdev=98.82 00:27:47.765 clat percentiles (usec): 00:27:47.765 | 1.00th=[ 40], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:27:47.765 | 30.00th=[ 57], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:27:47.765 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 66], 95.00th=[ 71], 00:27:47.765 | 99.00th=[ 84], 99.50th=[ 89], 99.90th=[ 2073], 99.95th=[ 2802], 00:27:47.765 | 99.99th=[ 3556] 00:27:47.765 bw ( KiB/s): min=58760, max=74968, per=100.00%, avg=63055.16, stdev=3500.38, samples=19 00:27:47.765 iops : min=14690, max=18742, avg=15763.79, stdev=875.09, samples=19 00:27:47.765 lat (usec) : 50=3.64%, 100=96.08%, 250=0.06%, 500=0.02%, 750=0.02% 00:27:47.765 lat (usec) : 1000=0.01% 00:27:47.765 lat (msec) : 2=0.07%, 4=0.10%, 10=0.01% 00:27:47.765 cpu : usr=3.22%, sys=10.43%, ctx=157549, majf=0, minf=797 00:27:47.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.765 issued rwts: total=0,157549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:47.765 00:27:47.765 Run status group 0 (all jobs): 00:27:47.765 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=615MiB (645MB), run=10001-10001msec 00:27:47.765 00:27:47.765 Disk stats (read/write): 00:27:47.765 ublkb0: ios=0/155855, merge=0/0, ticks=0/8597, in_queue=8598, util=99.11% 00:27:47.765 16:42:23 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:27:47.765 16:42:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.765 16:42:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:47.765 [2024-10-17 16:42:23.988528] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:27:47.765 [2024-10-17 16:42:24.023174] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:47.765 [2024-10-17 16:42:24.024088] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:27:47.765 [2024-10-17 16:42:24.028827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:47.765 [2024-10-17 16:42:24.029125] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:27:47.765 [2024-10-17 16:42:24.029139] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.765 16:42:24 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.765 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:47.765 [2024-10-17 16:42:24.052857] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:27:48.025 request: 00:27:48.025 { 00:27:48.025 "ublk_id": 0, 00:27:48.025 "method": "ublk_stop_disk", 00:27:48.025 "req_id": 1 00:27:48.025 } 00:27:48.025 Got JSON-RPC error response 00:27:48.025 response: 00:27:48.025 { 00:27:48.025 "code": -19, 00:27:48.025 "message": "No such device" 00:27:48.025 } 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:48.025 16:42:24 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:48.025 [2024-10-17 16:42:24.075856] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:27:48.025 [2024-10-17 16:42:24.083122] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:27:48.025 [2024-10-17 16:42:24.083189] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.025 16:42:24 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.025 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:48.592 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.592 16:42:24 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:27:48.592 16:42:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:27:48.592 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.592 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:48.592 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.592 16:42:24 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:27:48.592 16:42:24 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:27:48.592 16:42:24 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:27:48.851 16:42:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:48.851 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.851 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:48.851 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.851 16:42:24 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:27:48.851 16:42:24 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:27:48.851 ************************************ 00:27:48.851 END TEST test_create_ublk 00:27:48.851 ************************************ 00:27:48.851 16:42:24 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:27:48.851 00:27:48.851 real 0m11.800s 00:27:48.851 user 0m0.702s 00:27:48.851 sys 0m1.175s 00:27:48.851 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:48.851 16:42:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:48.851 16:42:25 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:27:48.851 16:42:25 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:48.851 16:42:25 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:48.851 16:42:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:27:48.851 ************************************ 00:27:48.851 START TEST test_create_multi_ublk 00:27:48.851 ************************************ 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:48.851 [2024-10-17 16:42:25.041735] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:27:48.851 [2024-10-17 16:42:25.044373] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.851 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:49.420 [2024-10-17 16:42:25.468913] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
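The leftover check that closes test_create_ublk compares both `bdev_get_bdevs` and `bdev_lvol_get_lvstores` against an empty array, guarding against one test leaking bdevs or lvstores into the next. Following the trace, a sketch of that `check_leftover_devices` helper:

    # Assert the target is clean between tests: no stray bdevs and no
    # stray lvstores. jq length of '[]' is 0.
    check_leftover_devices() {
        local leftover_bdevs leftover_lvs
        leftover_bdevs=$(rpc_cmd bdev_get_bdevs)
        [ "$(jq length <<< "$leftover_bdevs")" == 0 ]
        leftover_lvs=$(rpc_cmd bdev_lvol_get_lvstores)
        [ "$(jq length <<< "$leftover_lvs")" == 0 ]
    }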
00:27:49.420 [2024-10-17 16:42:25.469450] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:27:49.420 [2024-10-17 16:42:25.469473] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:27:49.420 [2024-10-17 16:42:25.469488] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:27:49.420 [2024-10-17 16:42:25.480787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:49.420 [2024-10-17 16:42:25.480822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:49.420 [2024-10-17 16:42:25.492735] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:49.420 [2024-10-17 16:42:25.493363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:27:49.420 [2024-10-17 16:42:25.518792] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:49.420 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:27:49.421 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.421 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:49.680 [2024-10-17 16:42:25.840881] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:27:49.680 [2024-10-17 16:42:25.841338] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:27:49.680 [2024-10-17 16:42:25.841359] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:27:49.680 [2024-10-17 16:42:25.841367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:27:49.680 [2024-10-17 16:42:25.848754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:49.680 [2024-10-17 16:42:25.848785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:49.680 [2024-10-17 16:42:25.856737] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:49.680 [2024-10-17 16:42:25.857386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:27:49.680 [2024-10-17 16:42:25.865775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:49.680 16:42:25 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.680 16:42:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:49.940 [2024-10-17 16:42:26.179899] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:27:49.940 [2024-10-17 16:42:26.180382] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:27:49.940 [2024-10-17 16:42:26.180415] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:27:49.940 [2024-10-17 16:42:26.180426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:27:49.940 [2024-10-17 16:42:26.187758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:49.940 [2024-10-17 16:42:26.187792] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:49.940 [2024-10-17 16:42:26.195742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:49.940 [2024-10-17 16:42:26.196386] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:27:49.940 [2024-10-17 16:42:26.207811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.940 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:50.533 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.533 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:27:50.533 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:27:50.533 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.533 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:50.533 [2024-10-17 16:42:26.508893] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:27:50.533 [2024-10-17 16:42:26.509338] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:27:50.533 [2024-10-17 16:42:26.509359] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:27:50.533 [2024-10-17 16:42:26.509367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:27:50.534 [2024-10-17 
16:42:26.522742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:50.534 [2024-10-17 16:42:26.522769] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:50.534 [2024-10-17 16:42:26.530752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:50.534 [2024-10-17 16:42:26.531317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:27:50.534 [2024-10-17 16:42:26.547753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:27:50.534 { 00:27:50.534 "ublk_device": "/dev/ublkb0", 00:27:50.534 "id": 0, 00:27:50.534 "queue_depth": 512, 00:27:50.534 "num_queues": 4, 00:27:50.534 "bdev_name": "Malloc0" 00:27:50.534 }, 00:27:50.534 { 00:27:50.534 "ublk_device": "/dev/ublkb1", 00:27:50.534 "id": 1, 00:27:50.534 "queue_depth": 512, 00:27:50.534 "num_queues": 4, 00:27:50.534 "bdev_name": "Malloc1" 00:27:50.534 }, 00:27:50.534 { 00:27:50.534 "ublk_device": "/dev/ublkb2", 00:27:50.534 "id": 2, 00:27:50.534 "queue_depth": 512, 00:27:50.534 "num_queues": 4, 00:27:50.534 "bdev_name": "Malloc2" 00:27:50.534 }, 00:27:50.534 { 00:27:50.534 "ublk_device": "/dev/ublkb3", 00:27:50.534 "id": 3, 00:27:50.534 "queue_depth": 512, 00:27:50.534 "num_queues": 4, 00:27:50.534 "bdev_name": "Malloc3" 00:27:50.534 } 00:27:50.534 ]' 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:50.534 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
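Each of the four devices is then validated field by field against the `ublk_get_disks` dump above. The per-index jq checks in the trace amount to a loop like this (a sketch using the NUM_QUEUE=4, QUEUE_DEPTH=512 and MAX_DEV_ID=3 values set earlier in this log):

    # Validate every started device against the ublk_get_disks output.
    ublk_dev=$(rpc_cmd ublk_get_disks)
    for i in $(seq 0 $MAX_DEV_ID); do
        [[ $(jq -r ".[$i].ublk_device" <<< "$ublk_dev") == "/dev/ublkb$i" ]]
        [[ $(jq -r ".[$i].id" <<< "$ublk_dev") == "$i" ]]
        [[ $(jq -r ".[$i].queue_depth" <<< "$ublk_dev") == "$QUEUE_DEPTH" ]]
        [[ $(jq -r ".[$i].num_queues" <<< "$ublk_dev") == "$NUM_QUEUE" ]]
        [[ $(jq -r ".[$i].bdev_name" <<< "$ublk_dev") == "Malloc$i" ]]
    done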
00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:27:50.791 16:42:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:27:50.791 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:27:50.791 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:50.791 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:27:50.791 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:27:50.791 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:27:51.050 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:51.309 [2024-10-17 16:42:27.464901] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:27:51.309 [2024-10-17 16:42:27.503191] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:51.309 [2024-10-17 16:42:27.504228] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:27:51.309 [2024-10-17 16:42:27.512758] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:51.309 [2024-10-17 16:42:27.513065] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:27:51.309 [2024-10-17 16:42:27.513082] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:51.309 [2024-10-17 16:42:27.528824] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:27:51.309 [2024-10-17 16:42:27.559100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:51.309 [2024-10-17 16:42:27.560183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:27:51.309 [2024-10-17 16:42:27.568745] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:51.309 [2024-10-17 16:42:27.569040] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:27:51.309 [2024-10-17 16:42:27.569059] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.309 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:51.309 [2024-10-17 16:42:27.584842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:27:51.568 [2024-10-17 16:42:27.621116] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:51.568 [2024-10-17 16:42:27.622172] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:27:51.568 [2024-10-17 16:42:27.631749] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:51.568 [2024-10-17 16:42:27.632027] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:27:51.568 [2024-10-17 16:42:27.632042] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:27:51.568 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.568 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:51.568 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:27:51.568 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.568 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:27:51.568 [2024-10-17 16:42:27.647842] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:27:51.568 [2024-10-17 16:42:27.687733] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:27:51.568 [2024-10-17 16:42:27.688555] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:27:51.568 [2024-10-17 16:42:27.695742] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:27:51.568 [2024-10-17 16:42:27.696048] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:27:51.568 [2024-10-17 16:42:27.696061] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:27:51.568 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.568 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:27:51.827 [2024-10-17 16:42:27.903836] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:27:51.827 [2024-10-17 16:42:27.911718] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:27:51.827 [2024-10-17 16:42:27.911764] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:27:51.827 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:27:51.827 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:51.827 16:42:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:51.827 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.827 16:42:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:52.394 16:42:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.394 16:42:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:52.394 16:42:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:52.394 16:42:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.395 16:42:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:52.961 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.961 16:42:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:52.961 16:42:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:27:52.961 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.961 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:53.220 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.220 16:42:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:27:53.220 16:42:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:27:53.220 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.220 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:27:53.788 ************************************ 00:27:53.788 END TEST test_create_multi_ublk 00:27:53.788 ************************************ 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:27:53.788 00:27:53.788 real 0m4.873s 00:27:53.788 user 0m1.018s 00:27:53.788 sys 0m0.243s 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:53.788 16:42:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:27:53.788 16:42:29 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:53.788 16:42:29 ublk -- ublk/ublk.sh@147 -- # cleanup 00:27:53.788 16:42:29 ublk -- ublk/ublk.sh@130 -- # killprocess 72575 00:27:53.788 16:42:29 ublk -- common/autotest_common.sh@950 -- # '[' -z 72575 ']' 00:27:53.788 16:42:29 ublk -- common/autotest_common.sh@954 -- # kill -0 72575 00:27:53.788 16:42:29 ublk -- common/autotest_common.sh@955 -- # uname 00:27:53.788 16:42:29 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:53.788 16:42:29 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72575 00:27:53.788 killing process with pid 72575 00:27:53.788 16:42:30 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:53.788 16:42:30 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:53.788 16:42:30 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72575' 00:27:53.788 16:42:30 ublk -- common/autotest_common.sh@969 -- # kill 72575 00:27:53.788 16:42:30 ublk -- common/autotest_common.sh@974 -- # wait 72575 00:27:55.167 [2024-10-17 16:42:31.265338] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:27:55.167 [2024-10-17 16:42:31.265402] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:27:56.550 ************************************ 00:27:56.550 END TEST ublk 00:27:56.550 ************************************ 00:27:56.550 00:27:56.550 real 0m31.732s 00:27:56.550 user 0m45.630s 00:27:56.550 sys 0m10.637s 00:27:56.550 16:42:32 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:56.550 16:42:32 ublk -- common/autotest_common.sh@10 -- # set +x 00:27:56.550 16:42:32 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:27:56.550 16:42:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:27:56.550 16:42:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:56.550 16:42:32 -- common/autotest_common.sh@10 -- # set +x 00:27:56.550 ************************************ 00:27:56.550 START TEST ublk_recovery 00:27:56.550 ************************************ 00:27:56.550 16:42:32 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:27:56.550 * Looking for test storage... 00:27:56.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:27:56.550 16:42:32 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:56.550 16:42:32 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:27:56.550 16:42:32 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:56.810 16:42:32 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:56.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.810 --rc genhtml_branch_coverage=1 00:27:56.810 --rc genhtml_function_coverage=1 00:27:56.810 --rc genhtml_legend=1 00:27:56.810 --rc geninfo_all_blocks=1 00:27:56.810 --rc geninfo_unexecuted_blocks=1 00:27:56.810 00:27:56.810 ' 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:56.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.810 --rc genhtml_branch_coverage=1 00:27:56.810 --rc genhtml_function_coverage=1 00:27:56.810 --rc genhtml_legend=1 00:27:56.810 --rc geninfo_all_blocks=1 00:27:56.810 --rc geninfo_unexecuted_blocks=1 00:27:56.810 00:27:56.810 ' 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:56.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.810 --rc genhtml_branch_coverage=1 00:27:56.810 --rc genhtml_function_coverage=1 00:27:56.810 --rc genhtml_legend=1 00:27:56.810 --rc geninfo_all_blocks=1 00:27:56.810 --rc geninfo_unexecuted_blocks=1 00:27:56.810 00:27:56.810 ' 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:56.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.810 --rc genhtml_branch_coverage=1 00:27:56.810 --rc genhtml_function_coverage=1 00:27:56.810 --rc genhtml_legend=1 00:27:56.810 --rc geninfo_all_blocks=1 00:27:56.810 --rc geninfo_unexecuted_blocks=1 00:27:56.810 00:27:56.810 ' 00:27:56.810 16:42:32 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:27:56.810 16:42:32 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:27:56.810 16:42:32 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:27:56.810 16:42:32 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:27:56.810 16:42:32 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73006 00:27:56.810 16:42:32 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:56.810 16:42:32 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73006 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73006 ']' 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:56.810 16:42:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:56.810 [2024-10-17 16:42:33.059797] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:27:56.810 [2024-10-17 16:42:33.060123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73006 ] 00:27:57.069 [2024-10-17 16:42:33.237525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:57.327 [2024-10-17 16:42:33.366257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.327 [2024-10-17 16:42:33.366288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:27:58.265 16:42:34 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.265 [2024-10-17 16:42:34.331797] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:27:58.265 [2024-10-17 16:42:34.334787] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.265 16:42:34 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.265 malloc0 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.265 16:42:34 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:27:58.265 [2024-10-17 16:42:34.506035] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:27:58.265 [2024-10-17 16:42:34.506192] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:27:58.265 [2024-10-17 16:42:34.506208] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:27:58.265 [2024-10-17 16:42:34.506218] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:27:58.265 [2024-10-17 16:42:34.514871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:27:58.265 [2024-10-17 16:42:34.514908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:27:58.265 [2024-10-17 16:42:34.521755] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:27:58.265 [2024-10-17 16:42:34.521939] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:27:58.265 [2024-10-17 16:42:34.531828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:27:58.265 1 00:27:58.265 16:42:34 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.265 16:42:34 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:27:59.643 16:42:35 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73048 00:27:59.643 16:42:35 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:27:59.643 16:42:35 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:27:59.643 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:27:59.643 fio-3.35 00:27:59.643 Starting 1 process 00:28:04.910 16:42:40 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73006 00:28:04.910 16:42:40 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:28:10.177 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73006 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:28:10.177 16:42:45 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73162 00:28:10.177 16:42:45 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:28:10.177 16:42:45 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:10.177 16:42:45 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73162 00:28:10.177 16:42:45 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 73162 ']' 00:28:10.177 16:42:45 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.177 16:42:45 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.177 16:42:45 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.177 16:42:45 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.177 16:42:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.177 [2024-10-17 16:42:45.677514] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:28:10.177 [2024-10-17 16:42:45.677658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73162 ] 00:28:10.177 [2024-10-17 16:42:45.851521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:10.177 [2024-10-17 16:42:45.981200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.177 [2024-10-17 16:42:45.981233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.745 16:42:46 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:10.745 16:42:46 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:28:10.745 16:42:46 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:28:10.745 16:42:46 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.745 16:42:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:10.745 [2024-10-17 16:42:46.948725] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:28:10.745 [2024-10-17 16:42:46.951829] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:28:10.745 16:42:46 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.745 16:42:46 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:28:10.745 16:42:46 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.745 16:42:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.004 malloc0 00:28:11.004 16:42:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.004 16:42:47 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:28:11.004 16:42:47 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.004 16:42:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:28:11.004 [2024-10-17 16:42:47.118913] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:28:11.004 [2024-10-17 16:42:47.118967] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:28:11.004 [2024-10-17 16:42:47.118980] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:28:11.004 [2024-10-17 16:42:47.126783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:28:11.004 [2024-10-17 16:42:47.126815] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:28:11.004 [2024-10-17 16:42:47.126827] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:28:11.004 [2024-10-17 16:42:47.126926] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:28:11.004 1 00:28:11.004 16:42:47 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.004 16:42:47 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73048 00:28:11.004 [2024-10-17 16:42:47.134753] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:28:11.004 [2024-10-17 16:42:47.138937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:28:11.004 [2024-10-17 16:42:47.144980] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:28:11.004 [2024-10-17 
16:42:47.145008] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:29:07.234 00:29:07.234 fio_test: (groupid=0, jobs=1): err= 0: pid=73051: Thu Oct 17 16:43:35 2024 00:29:07.234 read: IOPS=21.0k, BW=82.1MiB/s (86.1MB/s)(4925MiB/60002msec) 00:29:07.234 slat (nsec): min=1939, max=840505, avg=7736.47, stdev=2859.96 00:29:07.234 clat (usec): min=1141, max=6601.4k, avg=3012.83, stdev=48093.60 00:29:07.234 lat (usec): min=1149, max=6601.4k, avg=3020.56, stdev=48093.62 00:29:07.234 clat percentiles (usec): 00:29:07.234 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2311], 20.00th=[ 2409], 00:29:07.234 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:29:07.234 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 3032], 95.00th=[ 3884], 00:29:07.234 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7111], 99.95th=[ 7701], 00:29:07.234 | 99.99th=[13042] 00:29:07.234 bw ( KiB/s): min=12176, max=101672, per=100.00%, avg=93479.59, stdev=10747.46, samples=107 00:29:07.234 iops : min= 3044, max=25418, avg=23369.91, stdev=2686.87, samples=107 00:29:07.234 write: IOPS=21.0k, BW=82.0MiB/s (86.0MB/s)(4921MiB/60002msec); 0 zone resets 00:29:07.234 slat (nsec): min=1963, max=453613, avg=7724.69, stdev=2817.93 00:29:07.234 clat (usec): min=1236, max=6601.2k, avg=3064.37, stdev=45911.10 00:29:07.234 lat (usec): min=1244, max=6601.2k, avg=3072.10, stdev=45911.12 00:29:07.234 clat percentiles (usec): 00:29:07.234 | 1.00th=[ 2040], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:29:07.234 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2704], 00:29:07.234 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 3064], 95.00th=[ 3851], 00:29:07.234 | 99.00th=[ 5145], 99.50th=[ 5735], 99.90th=[ 7177], 99.95th=[ 7767], 00:29:07.234 | 99.99th=[12911] 00:29:07.234 bw ( KiB/s): min=11968, max=102536, per=100.00%, avg=93394.34, stdev=10713.22, samples=107 00:29:07.234 iops : min= 2992, max=25634, avg=23348.58, stdev=2678.30, samples=107 00:29:07.234 lat (msec) : 2=0.76%, 4=94.83%, 10=4.40%, 20=0.01%, >=2000=0.01% 00:29:07.234 cpu : usr=12.55%, sys=32.76%, ctx=107812, majf=0, minf=13 00:29:07.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:29:07.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:07.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:07.234 issued rwts: total=1260803,1259673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:07.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:07.234 00:29:07.234 Run status group 0 (all jobs): 00:29:07.234 READ: bw=82.1MiB/s (86.1MB/s), 82.1MiB/s-82.1MiB/s (86.1MB/s-86.1MB/s), io=4925MiB (5164MB), run=60002-60002msec 00:29:07.234 WRITE: bw=82.0MiB/s (86.0MB/s), 82.0MiB/s-82.0MiB/s (86.0MB/s-86.0MB/s), io=4921MiB (5160MB), run=60002-60002msec 00:29:07.234 00:29:07.234 Disk stats (read/write): 00:29:07.234 ublkb1: ios=1258184/1257049, merge=0/0, ticks=3672454/3600909, in_queue=7273363, util=99.93% 00:29:07.234 16:43:35 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.234 [2024-10-17 16:43:35.820558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:29:07.234 [2024-10-17 16:43:35.868910] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:29:07.234 [2024-10-17 
16:43:35.869092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:29:07.234 [2024-10-17 16:43:35.876783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:29:07.234 [2024-10-17 16:43:35.877033] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:29:07.234 [2024-10-17 16:43:35.877088] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.234 16:43:35 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.234 [2024-10-17 16:43:35.891886] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:07.234 [2024-10-17 16:43:35.899732] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:07.234 [2024-10-17 16:43:35.899779] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.234 16:43:35 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:29:07.234 16:43:35 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:29:07.234 16:43:35 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73162 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 73162 ']' 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 73162 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.234 16:43:35 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73162 00:29:07.235 killing process with pid 73162 00:29:07.235 16:43:35 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:07.235 16:43:35 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:07.235 16:43:35 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73162' 00:29:07.235 16:43:35 ublk_recovery -- common/autotest_common.sh@969 -- # kill 73162 00:29:07.235 16:43:35 ublk_recovery -- common/autotest_common.sh@974 -- # wait 73162 00:29:07.235 [2024-10-17 16:43:37.653404] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:29:07.235 [2024-10-17 16:43:37.653672] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:29:07.235 00:29:07.235 real 1m6.483s 00:29:07.235 user 1m49.911s 00:29:07.235 sys 0m39.456s 00:29:07.235 16:43:39 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:07.235 ************************************ 00:29:07.235 END TEST ublk_recovery 00:29:07.235 ************************************ 00:29:07.235 16:43:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:29:07.235 16:43:39 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@256 -- # timing_exit lib 00:29:07.235 16:43:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:07.235 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:29:07.235 16:43:39 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:29:07.235 16:43:39 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:29:07.235 16:43:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:07.235 16:43:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:07.235 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:29:07.235 ************************************ 00:29:07.235 START TEST ftl 00:29:07.235 ************************************ 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:29:07.235 * Looking for test storage... 00:29:07.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:07.235 16:43:39 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:07.235 16:43:39 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:29:07.235 16:43:39 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:29:07.235 16:43:39 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:29:07.235 16:43:39 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:29:07.235 16:43:39 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:29:07.235 16:43:39 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:07.235 16:43:39 ftl -- scripts/common.sh@344 -- # case "$op" in 00:29:07.235 16:43:39 ftl -- scripts/common.sh@345 -- # : 1 00:29:07.235 16:43:39 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:07.235 16:43:39 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:07.235 16:43:39 ftl -- scripts/common.sh@365 -- # decimal 1 00:29:07.235 16:43:39 ftl -- scripts/common.sh@353 -- # local d=1 00:29:07.235 16:43:39 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:07.235 16:43:39 ftl -- scripts/common.sh@355 -- # echo 1 00:29:07.235 16:43:39 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:29:07.235 16:43:39 ftl -- scripts/common.sh@366 -- # decimal 2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@353 -- # local d=2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:07.235 16:43:39 ftl -- scripts/common.sh@355 -- # echo 2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:29:07.235 16:43:39 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:07.235 16:43:39 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:07.235 16:43:39 ftl -- scripts/common.sh@368 -- # return 0 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:07.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.235 --rc genhtml_branch_coverage=1 00:29:07.235 --rc genhtml_function_coverage=1 00:29:07.235 --rc genhtml_legend=1 00:29:07.235 --rc geninfo_all_blocks=1 00:29:07.235 --rc geninfo_unexecuted_blocks=1 00:29:07.235 00:29:07.235 ' 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:07.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.235 --rc genhtml_branch_coverage=1 00:29:07.235 --rc genhtml_function_coverage=1 00:29:07.235 --rc genhtml_legend=1 00:29:07.235 --rc geninfo_all_blocks=1 00:29:07.235 --rc geninfo_unexecuted_blocks=1 00:29:07.235 00:29:07.235 ' 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:07.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.235 --rc genhtml_branch_coverage=1 00:29:07.235 --rc genhtml_function_coverage=1 00:29:07.235 --rc genhtml_legend=1 00:29:07.235 --rc geninfo_all_blocks=1 00:29:07.235 --rc geninfo_unexecuted_blocks=1 00:29:07.235 00:29:07.235 ' 00:29:07.235 16:43:39 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:07.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:07.235 --rc genhtml_branch_coverage=1 00:29:07.235 --rc genhtml_function_coverage=1 00:29:07.235 --rc genhtml_legend=1 00:29:07.235 --rc geninfo_all_blocks=1 00:29:07.235 --rc geninfo_unexecuted_blocks=1 00:29:07.235 00:29:07.235 ' 00:29:07.235 16:43:39 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:07.235 16:43:39 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:29:07.235 16:43:39 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:07.235 16:43:39 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:07.235 16:43:39 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:29:07.235 16:43:39 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:07.235 16:43:39 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:07.235 16:43:39 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:07.235 16:43:39 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:07.235 16:43:39 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:07.235 16:43:39 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:07.235 16:43:39 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:07.235 16:43:39 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:07.235 16:43:39 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:07.235 16:43:39 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:07.235 16:43:39 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:07.235 16:43:39 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:07.235 16:43:39 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:07.235 16:43:39 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:07.235 16:43:39 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:07.235 16:43:39 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:07.235 16:43:39 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:07.235 16:43:39 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:07.235 16:43:39 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:07.235 16:43:39 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:07.235 16:43:39 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:07.235 16:43:39 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:07.235 16:43:39 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.235 16:43:39 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:07.235 16:43:39 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:07.235 16:43:39 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:29:07.235 16:43:39 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:29:07.235 16:43:39 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:29:07.235 16:43:39 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:29:07.235 16:43:39 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:07.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:07.235 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:07.235 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:07.235 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:07.235 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:07.235 16:43:40 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:29:07.235 16:43:40 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73972 00:29:07.235 16:43:40 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73972 00:29:07.235 16:43:40 ftl -- common/autotest_common.sh@831 -- # '[' -z 73972 ']' 00:29:07.236 16:43:40 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.236 16:43:40 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:07.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.236 16:43:40 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.236 16:43:40 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.236 16:43:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:07.236 [2024-10-17 16:43:40.465692] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:29:07.236 [2024-10-17 16:43:40.465835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73972 ] 00:29:07.236 [2024-10-17 16:43:40.642855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.236 [2024-10-17 16:43:40.768097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.236 16:43:41 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:07.236 16:43:41 ftl -- common/autotest_common.sh@864 -- # return 0 00:29:07.236 16:43:41 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:29:07.236 16:43:41 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:29:07.236 16:43:42 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:07.236 16:43:42 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@50 -- # break 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:29:07.236 16:43:43 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:29:07.493 16:43:43 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:29:07.493 16:43:43 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:29:07.493 16:43:43 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:29:07.493 16:43:43 ftl -- ftl/ftl.sh@63 -- # break 00:29:07.493 16:43:43 ftl -- ftl/ftl.sh@66 -- # killprocess 73972 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@950 -- # '[' -z 73972 ']' 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@954 -- # kill -0 73972 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@955 -- # uname 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:07.493 16:43:43 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73972 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:07.493 killing process with pid 73972 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73972' 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@969 -- # kill 73972 00:29:07.493 16:43:43 ftl -- common/autotest_common.sh@974 -- # wait 73972 00:29:10.094 16:43:46 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:29:10.094 16:43:46 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:29:10.094 16:43:46 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:29:10.094 16:43:46 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.094 16:43:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:10.094 ************************************ 00:29:10.094 START TEST ftl_fio_basic 00:29:10.094 ************************************ 00:29:10.094 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:29:10.094 * Looking for test storage... 00:29:10.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:10.094 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:10.094 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:10.094 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:29:10.353 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.354 --rc genhtml_branch_coverage=1 00:29:10.354 --rc genhtml_function_coverage=1 00:29:10.354 --rc genhtml_legend=1 00:29:10.354 --rc geninfo_all_blocks=1 00:29:10.354 --rc geninfo_unexecuted_blocks=1 00:29:10.354 00:29:10.354 ' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.354 --rc genhtml_branch_coverage=1 00:29:10.354 --rc genhtml_function_coverage=1 00:29:10.354 --rc genhtml_legend=1 00:29:10.354 --rc geninfo_all_blocks=1 00:29:10.354 --rc geninfo_unexecuted_blocks=1 00:29:10.354 00:29:10.354 ' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.354 --rc genhtml_branch_coverage=1 00:29:10.354 --rc genhtml_function_coverage=1 00:29:10.354 --rc genhtml_legend=1 00:29:10.354 --rc geninfo_all_blocks=1 00:29:10.354 --rc geninfo_unexecuted_blocks=1 00:29:10.354 00:29:10.354 ' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:10.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:10.354 --rc genhtml_branch_coverage=1 00:29:10.354 --rc genhtml_function_coverage=1 00:29:10.354 --rc genhtml_legend=1 00:29:10.354 --rc geninfo_all_blocks=1 00:29:10.354 --rc geninfo_unexecuted_blocks=1 00:29:10.354 00:29:10.354 ' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74122 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74122 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 74122 ']' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:29:10.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:10.354 16:43:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:10.354 [2024-10-17 16:43:46.548538] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
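[Editor's note] fio.sh@44-46 above launch spdk_tgt pinned to cores 0-2 (-m 7 is the 0b111 core mask), record its PID in svcpid, and block in waitforlisten until the RPC socket answers; the trace shows its max_retries=100 default. A hand-rolled equivalent of that wait loop, reusing the variables exported earlier in this log (the rpc_get_methods probe is our choice of cheap liveness check):

    "$spdk_tgt_bin" -m 7 &
    svcpid=$!
    for ((i = 0; i < 100; i++)); do
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
    kill -0 "$svcpid"   # fail fast if the target died instead of listening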
00:29:10.354 [2024-10-17 16:43:46.548683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74122 ] 00:29:10.614 [2024-10-17 16:43:46.725261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:10.614 [2024-10-17 16:43:46.855156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.614 [2024-10-17 16:43:46.855241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.614 [2024-10-17 16:43:46.855273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:29:11.548 16:43:47 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:29:11.808 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:12.066 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:12.066 { 00:29:12.066 "name": "nvme0n1", 00:29:12.066 "aliases": [ 00:29:12.066 "66b18c0d-7f75-478f-9bb9-ede390024132" 00:29:12.066 ], 00:29:12.066 "product_name": "NVMe disk", 00:29:12.066 "block_size": 4096, 00:29:12.066 "num_blocks": 1310720, 00:29:12.066 "uuid": "66b18c0d-7f75-478f-9bb9-ede390024132", 00:29:12.066 "numa_id": -1, 00:29:12.066 "assigned_rate_limits": { 00:29:12.066 "rw_ios_per_sec": 0, 00:29:12.066 "rw_mbytes_per_sec": 0, 00:29:12.066 "r_mbytes_per_sec": 0, 00:29:12.066 "w_mbytes_per_sec": 0 00:29:12.066 }, 00:29:12.066 "claimed": false, 00:29:12.066 "zoned": false, 00:29:12.066 "supported_io_types": { 00:29:12.066 "read": true, 00:29:12.066 "write": true, 00:29:12.066 "unmap": true, 00:29:12.066 "flush": true, 00:29:12.066 "reset": true, 00:29:12.066 "nvme_admin": true, 00:29:12.066 "nvme_io": true, 00:29:12.066 "nvme_io_md": false, 00:29:12.066 "write_zeroes": true, 00:29:12.066 "zcopy": false, 00:29:12.066 "get_zone_info": false, 00:29:12.066 "zone_management": false, 00:29:12.066 "zone_append": false, 00:29:12.066 "compare": true, 00:29:12.066 "compare_and_write": false, 00:29:12.066 "abort": true, 00:29:12.066 
"seek_hole": false, 00:29:12.066 "seek_data": false, 00:29:12.066 "copy": true, 00:29:12.066 "nvme_iov_md": false 00:29:12.066 }, 00:29:12.066 "driver_specific": { 00:29:12.066 "nvme": [ 00:29:12.066 { 00:29:12.066 "pci_address": "0000:00:11.0", 00:29:12.066 "trid": { 00:29:12.066 "trtype": "PCIe", 00:29:12.066 "traddr": "0000:00:11.0" 00:29:12.066 }, 00:29:12.066 "ctrlr_data": { 00:29:12.066 "cntlid": 0, 00:29:12.066 "vendor_id": "0x1b36", 00:29:12.066 "model_number": "QEMU NVMe Ctrl", 00:29:12.066 "serial_number": "12341", 00:29:12.066 "firmware_revision": "8.0.0", 00:29:12.066 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:12.066 "oacs": { 00:29:12.066 "security": 0, 00:29:12.066 "format": 1, 00:29:12.066 "firmware": 0, 00:29:12.066 "ns_manage": 1 00:29:12.066 }, 00:29:12.066 "multi_ctrlr": false, 00:29:12.066 "ana_reporting": false 00:29:12.066 }, 00:29:12.066 "vs": { 00:29:12.066 "nvme_version": "1.4" 00:29:12.066 }, 00:29:12.066 "ns_data": { 00:29:12.066 "id": 1, 00:29:12.066 "can_share": false 00:29:12.066 } 00:29:12.066 } 00:29:12.066 ], 00:29:12.066 "mp_policy": "active_passive" 00:29:12.066 } 00:29:12.066 } 00:29:12.066 ]' 00:29:12.066 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:12.324 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:29:12.324 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:12.324 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:12.325 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:12.325 16:43:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:29:12.325 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:29:12.325 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:12.325 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:29:12.325 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:12.325 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:12.582 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:29:12.582 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:12.841 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d78fa835-e2ce-45f6-9b70-f04d4a9ac1b7 00:29:12.841 16:43:48 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d78fa835-e2ce-45f6-9b70-f04d4a9ac1b7 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=93f976da-6ccf-4050-bbab-30879062337c 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 93f976da-6ccf-4050-bbab-30879062337c 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=93f976da-6ccf-4050-bbab-30879062337c 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 93f976da-6ccf-4050-bbab-30879062337c 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=93f976da-6ccf-4050-bbab-30879062337c 
00:29:13.100 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93f976da-6ccf-4050-bbab-30879062337c 00:29:13.100 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:13.100 { 00:29:13.100 "name": "93f976da-6ccf-4050-bbab-30879062337c", 00:29:13.100 "aliases": [ 00:29:13.100 "lvs/nvme0n1p0" 00:29:13.100 ], 00:29:13.100 "product_name": "Logical Volume", 00:29:13.100 "block_size": 4096, 00:29:13.100 "num_blocks": 26476544, 00:29:13.100 "uuid": "93f976da-6ccf-4050-bbab-30879062337c", 00:29:13.100 "assigned_rate_limits": { 00:29:13.100 "rw_ios_per_sec": 0, 00:29:13.100 "rw_mbytes_per_sec": 0, 00:29:13.100 "r_mbytes_per_sec": 0, 00:29:13.100 "w_mbytes_per_sec": 0 00:29:13.100 }, 00:29:13.100 "claimed": false, 00:29:13.100 "zoned": false, 00:29:13.100 "supported_io_types": { 00:29:13.100 "read": true, 00:29:13.100 "write": true, 00:29:13.100 "unmap": true, 00:29:13.100 "flush": false, 00:29:13.100 "reset": true, 00:29:13.100 "nvme_admin": false, 00:29:13.100 "nvme_io": false, 00:29:13.100 "nvme_io_md": false, 00:29:13.100 "write_zeroes": true, 00:29:13.100 "zcopy": false, 00:29:13.100 "get_zone_info": false, 00:29:13.100 "zone_management": false, 00:29:13.100 "zone_append": false, 00:29:13.100 "compare": false, 00:29:13.100 "compare_and_write": false, 00:29:13.100 "abort": false, 00:29:13.100 "seek_hole": true, 00:29:13.100 "seek_data": true, 00:29:13.100 "copy": false, 00:29:13.100 "nvme_iov_md": false 00:29:13.100 }, 00:29:13.100 "driver_specific": { 00:29:13.100 "lvol": { 00:29:13.100 "lvol_store_uuid": "d78fa835-e2ce-45f6-9b70-f04d4a9ac1b7", 00:29:13.100 "base_bdev": "nvme0n1", 00:29:13.100 "thin_provision": true, 00:29:13.100 "num_allocated_clusters": 0, 00:29:13.100 "snapshot": false, 00:29:13.100 "clone": false, 00:29:13.100 "esnap_clone": false 00:29:13.100 } 00:29:13.100 } 00:29:13.100 } 00:29:13.100 ]' 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:29:13.359 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:13.926 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:13.926 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:13.926 16:43:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 93f976da-6ccf-4050-bbab-30879062337c 00:29:13.926 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=93f976da-6ccf-4050-bbab-30879062337c 00:29:13.926 16:43:49 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:13.926 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:29:13.926 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:29:13.926 16:43:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93f976da-6ccf-4050-bbab-30879062337c 00:29:13.926 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:13.926 { 00:29:13.926 "name": "93f976da-6ccf-4050-bbab-30879062337c", 00:29:13.926 "aliases": [ 00:29:13.926 "lvs/nvme0n1p0" 00:29:13.926 ], 00:29:13.926 "product_name": "Logical Volume", 00:29:13.926 "block_size": 4096, 00:29:13.926 "num_blocks": 26476544, 00:29:13.926 "uuid": "93f976da-6ccf-4050-bbab-30879062337c", 00:29:13.926 "assigned_rate_limits": { 00:29:13.926 "rw_ios_per_sec": 0, 00:29:13.926 "rw_mbytes_per_sec": 0, 00:29:13.926 "r_mbytes_per_sec": 0, 00:29:13.926 "w_mbytes_per_sec": 0 00:29:13.926 }, 00:29:13.926 "claimed": false, 00:29:13.926 "zoned": false, 00:29:13.926 "supported_io_types": { 00:29:13.926 "read": true, 00:29:13.926 "write": true, 00:29:13.926 "unmap": true, 00:29:13.926 "flush": false, 00:29:13.926 "reset": true, 00:29:13.926 "nvme_admin": false, 00:29:13.926 "nvme_io": false, 00:29:13.926 "nvme_io_md": false, 00:29:13.926 "write_zeroes": true, 00:29:13.926 "zcopy": false, 00:29:13.926 "get_zone_info": false, 00:29:13.926 "zone_management": false, 00:29:13.926 "zone_append": false, 00:29:13.926 "compare": false, 00:29:13.926 "compare_and_write": false, 00:29:13.926 "abort": false, 00:29:13.926 "seek_hole": true, 00:29:13.926 "seek_data": true, 00:29:13.926 "copy": false, 00:29:13.926 "nvme_iov_md": false 00:29:13.926 }, 00:29:13.926 "driver_specific": { 00:29:13.926 "lvol": { 00:29:13.926 "lvol_store_uuid": "d78fa835-e2ce-45f6-9b70-f04d4a9ac1b7", 00:29:13.926 "base_bdev": "nvme0n1", 00:29:13.926 "thin_provision": true, 00:29:13.926 "num_allocated_clusters": 0, 00:29:13.926 "snapshot": false, 00:29:13.926 "clone": false, 00:29:13.926 "esnap_clone": false 00:29:13.926 } 00:29:13.926 } 00:29:13.926 } 00:29:13.926 ]' 00:29:13.926 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:13.926 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:29:13.926 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:14.185 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:14.185 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:14.185 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:29:14.185 16:43:50 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:29:14.185 16:43:50 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:29:14.444 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 93f976da-6ccf-4050-bbab-30879062337c 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=93f976da-6ccf-4050-bbab-30879062337c 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:29:14.444 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93f976da-6ccf-4050-bbab-30879062337c 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:14.703 { 00:29:14.703 "name": "93f976da-6ccf-4050-bbab-30879062337c", 00:29:14.703 "aliases": [ 00:29:14.703 "lvs/nvme0n1p0" 00:29:14.703 ], 00:29:14.703 "product_name": "Logical Volume", 00:29:14.703 "block_size": 4096, 00:29:14.703 "num_blocks": 26476544, 00:29:14.703 "uuid": "93f976da-6ccf-4050-bbab-30879062337c", 00:29:14.703 "assigned_rate_limits": { 00:29:14.703 "rw_ios_per_sec": 0, 00:29:14.703 "rw_mbytes_per_sec": 0, 00:29:14.703 "r_mbytes_per_sec": 0, 00:29:14.703 "w_mbytes_per_sec": 0 00:29:14.703 }, 00:29:14.703 "claimed": false, 00:29:14.703 "zoned": false, 00:29:14.703 "supported_io_types": { 00:29:14.703 "read": true, 00:29:14.703 "write": true, 00:29:14.703 "unmap": true, 00:29:14.703 "flush": false, 00:29:14.703 "reset": true, 00:29:14.703 "nvme_admin": false, 00:29:14.703 "nvme_io": false, 00:29:14.703 "nvme_io_md": false, 00:29:14.703 "write_zeroes": true, 00:29:14.703 "zcopy": false, 00:29:14.703 "get_zone_info": false, 00:29:14.703 "zone_management": false, 00:29:14.703 "zone_append": false, 00:29:14.703 "compare": false, 00:29:14.703 "compare_and_write": false, 00:29:14.703 "abort": false, 00:29:14.703 "seek_hole": true, 00:29:14.703 "seek_data": true, 00:29:14.703 "copy": false, 00:29:14.703 "nvme_iov_md": false 00:29:14.703 }, 00:29:14.703 "driver_specific": { 00:29:14.703 "lvol": { 00:29:14.703 "lvol_store_uuid": "d78fa835-e2ce-45f6-9b70-f04d4a9ac1b7", 00:29:14.703 "base_bdev": "nvme0n1", 00:29:14.703 "thin_provision": true, 00:29:14.703 "num_allocated_clusters": 0, 00:29:14.703 "snapshot": false, 00:29:14.703 "clone": false, 00:29:14.703 "esnap_clone": false 00:29:14.703 } 00:29:14.703 } 00:29:14.703 } 00:29:14.703 ]' 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:29:14.703 16:43:50 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 93f976da-6ccf-4050-bbab-30879062337c -c nvc0n1p0 --l2p_dram_limit 60 00:29:14.963 [2024-10-17 16:43:51.188893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.963 [2024-10-17 16:43:51.188956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:14.963 [2024-10-17 16:43:51.188979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:14.963 
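[Editor's note] The "fio.sh: line 52: [: -eq: unary operator expected" a few entries up is a real, if harmless here, script bug: whatever variable fio.sh tests at line 52 expanded to the empty string, so test(1) saw '[ -eq 1 ]' with no left operand. The log does not show the variable's name, so the sketch below uses a hypothetical flag; the generic hardening is to quote and default the operand:

    flag=                                # unset/empty, as in this run
    if [ "${flag:-0}" -eq 1 ]; then      # test(1) now always sees a number
        echo "flag set"
    fi
    # or bypass test(1) entirely with arithmetic evaluation:
    if (( ${flag:-0} == 1 )); then echo "flag set"; fi

With the test silently false, the run falls through to the default l2p_dram_size_mb=60, which is then handed to bdev_ftl_create as --l2p_dram_limit 60.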
[2024-10-17 16:43:51.188990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.963 [2024-10-17 16:43:51.189097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.963 [2024-10-17 16:43:51.189114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:14.963 [2024-10-17 16:43:51.189129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:14.963 [2024-10-17 16:43:51.189144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.963 [2024-10-17 16:43:51.189200] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:14.963 [2024-10-17 16:43:51.190397] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:14.963 [2024-10-17 16:43:51.190444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.963 [2024-10-17 16:43:51.190460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:14.963 [2024-10-17 16:43:51.190475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:29:14.963 [2024-10-17 16:43:51.190486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.963 [2024-10-17 16:43:51.190608] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 907264b3-0ff6-4ba0-b9b8-009b4bf6b5fb 00:29:14.963 [2024-10-17 16:43:51.192192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.963 [2024-10-17 16:43:51.192226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:29:14.963 [2024-10-17 16:43:51.192240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:14.963 [2024-10-17 16:43:51.192257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.963 [2024-10-17 16:43:51.199882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.963 [2024-10-17 16:43:51.199929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:14.963 [2024-10-17 16:43:51.199944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.546 ms 00:29:14.963 [2024-10-17 16:43:51.199958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.963 [2024-10-17 16:43:51.200114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.963 [2024-10-17 16:43:51.200135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:14.963 [2024-10-17 16:43:51.200151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:29:14.963 [2024-10-17 16:43:51.200169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.964 [2024-10-17 16:43:51.200264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.964 [2024-10-17 16:43:51.200281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:14.964 [2024-10-17 16:43:51.200292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:14.964 [2024-10-17 16:43:51.200305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.964 [2024-10-17 16:43:51.200340] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:14.964 [2024-10-17 16:43:51.205630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.964 [2024-10-17 
16:43:51.205670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:14.964 [2024-10-17 16:43:51.205689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.303 ms 00:29:14.964 [2024-10-17 16:43:51.205716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.964 [2024-10-17 16:43:51.205775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.964 [2024-10-17 16:43:51.205791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:14.964 [2024-10-17 16:43:51.205805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:14.964 [2024-10-17 16:43:51.205816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.964 [2024-10-17 16:43:51.205880] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:14.964 [2024-10-17 16:43:51.206031] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:14.964 [2024-10-17 16:43:51.206058] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:14.964 [2024-10-17 16:43:51.206073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:14.964 [2024-10-17 16:43:51.206090] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206103] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206118] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:14.964 [2024-10-17 16:43:51.206129] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:14.964 [2024-10-17 16:43:51.206142] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:14.964 [2024-10-17 16:43:51.206152] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:14.964 [2024-10-17 16:43:51.206166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.964 [2024-10-17 16:43:51.206177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:14.964 [2024-10-17 16:43:51.206190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:29:14.964 [2024-10-17 16:43:51.206206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.964 [2024-10-17 16:43:51.206297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.964 [2024-10-17 16:43:51.206308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:14.964 [2024-10-17 16:43:51.206322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:14.964 [2024-10-17 16:43:51.206333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.964 [2024-10-17 16:43:51.206464] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:14.964 [2024-10-17 16:43:51.206482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:14.964 [2024-10-17 16:43:51.206496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206525] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:29:14.964 [2024-10-17 16:43:51.206535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:14.964 [2024-10-17 16:43:51.206571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.964 [2024-10-17 16:43:51.206594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:14.964 [2024-10-17 16:43:51.206605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:14.964 [2024-10-17 16:43:51.206618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:14.964 [2024-10-17 16:43:51.206627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:14.964 [2024-10-17 16:43:51.206641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:14.964 [2024-10-17 16:43:51.206650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:14.964 [2024-10-17 16:43:51.206678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:14.964 [2024-10-17 16:43:51.206726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:14.964 [2024-10-17 16:43:51.206762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:14.964 [2024-10-17 16:43:51.206797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:14.964 [2024-10-17 16:43:51.206829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:14.964 [2024-10-17 16:43:51.206852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:14.964 [2024-10-17 16:43:51.206873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.964 [2024-10-17 16:43:51.206896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:14.964 [2024-10-17 16:43:51.206922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:14.964 [2024-10-17 16:43:51.206936] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:14.964 [2024-10-17 16:43:51.206946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:14.964 [2024-10-17 16:43:51.206958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:14.964 [2024-10-17 16:43:51.206968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.964 [2024-10-17 16:43:51.206981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:14.964 [2024-10-17 16:43:51.206991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:14.964 [2024-10-17 16:43:51.207005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.964 [2024-10-17 16:43:51.207015] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:14.964 [2024-10-17 16:43:51.207028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:14.964 [2024-10-17 16:43:51.207039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:14.964 [2024-10-17 16:43:51.207053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:14.964 [2024-10-17 16:43:51.207064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:14.964 [2024-10-17 16:43:51.207080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:14.964 [2024-10-17 16:43:51.207090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:14.964 [2024-10-17 16:43:51.207103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:14.964 [2024-10-17 16:43:51.207113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:14.964 [2024-10-17 16:43:51.207126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:14.964 [2024-10-17 16:43:51.207143] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:14.964 [2024-10-17 16:43:51.207159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.964 [2024-10-17 16:43:51.207171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:14.964 [2024-10-17 16:43:51.207185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:14.964 [2024-10-17 16:43:51.207198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:14.964 [2024-10-17 16:43:51.207212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:14.964 [2024-10-17 16:43:51.207223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:14.964 [2024-10-17 16:43:51.207237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:14.964 [2024-10-17 16:43:51.207249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:14.964 [2024-10-17 16:43:51.207263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:29:14.964 [2024-10-17 16:43:51.207274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:14.964 [2024-10-17 16:43:51.207290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:14.964 [2024-10-17 16:43:51.207301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:14.964 [2024-10-17 16:43:51.207316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:14.964 [2024-10-17 16:43:51.207327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:14.964 [2024-10-17 16:43:51.207342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:14.964 [2024-10-17 16:43:51.207353] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:14.964 [2024-10-17 16:43:51.207368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:14.964 [2024-10-17 16:43:51.207380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:14.964 [2024-10-17 16:43:51.207393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:14.965 [2024-10-17 16:43:51.207405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:14.965 [2024-10-17 16:43:51.207420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:14.965 [2024-10-17 16:43:51.207432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:14.965 [2024-10-17 16:43:51.207446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:14.965 [2024-10-17 16:43:51.207460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:29:14.965 [2024-10-17 16:43:51.207473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:14.965 [2024-10-17 16:43:51.207547] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
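[Editor's note] The layout dump is internally consistent and worth a sanity check: the l2p region spans 80.00 MiB because 20971520 L2P entries * 4 B/entry = 83886080 B = exactly 80 MiB, and the data_btm region is 102400.00 MiB = 100 GiB of the 103424 MiB (101 GiB) base lvol, the remainder going to the metadata regions listed above. The --l2p_dram_limit 60 from the create call resurfaces below as "l2p maximum resident size is: 59 (of 60) MiB". The rpc.py -t 240 on the create call matches fio.sh's timeout=240: a first-time startup scrubs the whole NV cache region (the notice just above), which the following entries time at ~4.4 s here but which scales with cache size. Quick arithmetic check:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80  (MiB of L2P table)
    echo $(( 102400 / 1024 ))                # -> 100 (GiB of user data space)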
00:29:14.965 [2024-10-17 16:43:51.207566] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:20.255 [2024-10-17 16:43:55.646793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.647069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:20.255 [2024-10-17 16:43:55.647161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4446.451 ms 00:29:20.255 [2024-10-17 16:43:55.647209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.686843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.687097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:20.255 [2024-10-17 16:43:55.687276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.333 ms 00:29:20.255 [2024-10-17 16:43:55.687323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.687549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.687600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:20.255 [2024-10-17 16:43:55.687723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:20.255 [2024-10-17 16:43:55.687770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.745000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.745232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:20.255 [2024-10-17 16:43:55.745348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.224 ms 00:29:20.255 [2024-10-17 16:43:55.745404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.745499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.745657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:20.255 [2024-10-17 16:43:55.745739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:20.255 [2024-10-17 16:43:55.745786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.746344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.746496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:20.255 [2024-10-17 16:43:55.746600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:29:20.255 [2024-10-17 16:43:55.746652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.747038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.747181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:20.255 [2024-10-17 16:43:55.747204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:29:20.255 [2024-10-17 16:43:55.747224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.769983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.770126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:20.255 [2024-10-17 
16:43:55.770201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.757 ms 00:29:20.255 [2024-10-17 16:43:55.770241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.783399] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:29:20.255 [2024-10-17 16:43:55.800753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.801027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:20.255 [2024-10-17 16:43:55.801173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.395 ms 00:29:20.255 [2024-10-17 16:43:55.801215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.889646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.889923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:20.255 [2024-10-17 16:43:55.890090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.483 ms 00:29:20.255 [2024-10-17 16:43:55.890133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.890405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.890468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:20.255 [2024-10-17 16:43:55.890548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:29:20.255 [2024-10-17 16:43:55.890586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.929593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.929786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:20.255 [2024-10-17 16:43:55.929868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.948 ms 00:29:20.255 [2024-10-17 16:43:55.929909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.967670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.967864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:20.255 [2024-10-17 16:43:55.967894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.710 ms 00:29:20.255 [2024-10-17 16:43:55.967905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:55.968651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:55.968677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:20.255 [2024-10-17 16:43:55.968708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:29:20.255 [2024-10-17 16:43:55.968720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:56.072891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:56.072970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:20.255 [2024-10-17 16:43:56.072996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.259 ms 00:29:20.255 [2024-10-17 16:43:56.073007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 
16:43:56.113602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:56.113664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:20.255 [2024-10-17 16:43:56.113686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.498 ms 00:29:20.255 [2024-10-17 16:43:56.113711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:56.153491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:56.153555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:20.255 [2024-10-17 16:43:56.153575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.761 ms 00:29:20.255 [2024-10-17 16:43:56.153586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:56.193116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:56.193331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:20.255 [2024-10-17 16:43:56.193363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.531 ms 00:29:20.255 [2024-10-17 16:43:56.193379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:56.193444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:56.193457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:20.255 [2024-10-17 16:43:56.193474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:20.255 [2024-10-17 16:43:56.193485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:56.193651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.255 [2024-10-17 16:43:56.193667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:20.255 [2024-10-17 16:43:56.193681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:29:20.255 [2024-10-17 16:43:56.193691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.255 [2024-10-17 16:43:56.194951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5013.639 ms, result 0 00:29:20.255 { 00:29:20.255 "name": "ftl0", 00:29:20.255 "uuid": "907264b3-0ff6-4ba0-b9b8-009b4bf6b5fb" 00:29:20.255 } 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:20.256 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:29:20.522 [ 00:29:20.522 { 00:29:20.522 "name": "ftl0", 00:29:20.522 "aliases": [ 00:29:20.522 "907264b3-0ff6-4ba0-b9b8-009b4bf6b5fb" 00:29:20.522 ], 00:29:20.522 "product_name": "FTL 
disk", 00:29:20.522 "block_size": 4096, 00:29:20.522 "num_blocks": 20971520, 00:29:20.522 "uuid": "907264b3-0ff6-4ba0-b9b8-009b4bf6b5fb", 00:29:20.522 "assigned_rate_limits": { 00:29:20.522 "rw_ios_per_sec": 0, 00:29:20.522 "rw_mbytes_per_sec": 0, 00:29:20.522 "r_mbytes_per_sec": 0, 00:29:20.522 "w_mbytes_per_sec": 0 00:29:20.522 }, 00:29:20.522 "claimed": false, 00:29:20.522 "zoned": false, 00:29:20.522 "supported_io_types": { 00:29:20.522 "read": true, 00:29:20.522 "write": true, 00:29:20.522 "unmap": true, 00:29:20.522 "flush": true, 00:29:20.522 "reset": false, 00:29:20.522 "nvme_admin": false, 00:29:20.522 "nvme_io": false, 00:29:20.522 "nvme_io_md": false, 00:29:20.522 "write_zeroes": true, 00:29:20.522 "zcopy": false, 00:29:20.522 "get_zone_info": false, 00:29:20.522 "zone_management": false, 00:29:20.522 "zone_append": false, 00:29:20.522 "compare": false, 00:29:20.522 "compare_and_write": false, 00:29:20.522 "abort": false, 00:29:20.522 "seek_hole": false, 00:29:20.522 "seek_data": false, 00:29:20.522 "copy": false, 00:29:20.522 "nvme_iov_md": false 00:29:20.522 }, 00:29:20.522 "driver_specific": { 00:29:20.522 "ftl": { 00:29:20.522 "base_bdev": "93f976da-6ccf-4050-bbab-30879062337c", 00:29:20.522 "cache": "nvc0n1p0" 00:29:20.522 } 00:29:20.522 } 00:29:20.522 } 00:29:20.522 ] 00:29:20.522 16:43:56 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:29:20.522 16:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:29:20.522 16:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:20.781 16:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:29:20.781 16:43:56 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:21.049 [2024-10-17 16:43:57.094287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.049 [2024-10-17 16:43:57.094594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:21.049 [2024-10-17 16:43:57.094624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:21.049 [2024-10-17 16:43:57.094640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.049 [2024-10-17 16:43:57.094699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:21.049 [2024-10-17 16:43:57.099365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.049 [2024-10-17 16:43:57.099405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:21.050 [2024-10-17 16:43:57.099421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:29:21.050 [2024-10-17 16:43:57.099432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.099940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.099962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:21.050 [2024-10-17 16:43:57.099977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:29:21.050 [2024-10-17 16:43:57.099987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.102698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.102728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:21.050 
[2024-10-17 16:43:57.102742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:29:21.050 [2024-10-17 16:43:57.102756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.108050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.108083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:21.050 [2024-10-17 16:43:57.108102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.267 ms 00:29:21.050 [2024-10-17 16:43:57.108113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.146058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.146111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:21.050 [2024-10-17 16:43:57.146129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.908 ms 00:29:21.050 [2024-10-17 16:43:57.146139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.169653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.169717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:21.050 [2024-10-17 16:43:57.169736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.461 ms 00:29:21.050 [2024-10-17 16:43:57.169747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.170004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.170025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:21.050 [2024-10-17 16:43:57.170039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:29:21.050 [2024-10-17 16:43:57.170050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.208301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.208352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:21.050 [2024-10-17 16:43:57.208377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.278 ms 00:29:21.050 [2024-10-17 16:43:57.208404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.246410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.246461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:21.050 [2024-10-17 16:43:57.246479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.988 ms 00:29:21.050 [2024-10-17 16:43:57.246490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.284929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.285146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:21.050 [2024-10-17 16:43:57.285176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.432 ms 00:29:21.050 [2024-10-17 16:43:57.285187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.323993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.050 [2024-10-17 16:43:57.324065] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:21.050 [2024-10-17 16:43:57.324086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.717 ms 00:29:21.050 [2024-10-17 16:43:57.324096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.050 [2024-10-17 16:43:57.324160] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:21.050 [2024-10-17 16:43:57.324179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 
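[Editor's note] The bdev_ftl_unload path above persists everything startup created (L2P, NV cache metadata, valid map, P2L metadata, band info, trim state, superblock) before setting the clean state, so a subsequent attach can presumably start from that state instead of re-scrubbing and recovering. The surrounding bands-validity dump is the expected picture for a just-created, never-written device: every band free with wr_cnt 0 out of 261120 blocks, i.e. 1020 MiB per band:

    echo $(( 261120 * 4096 / 1024 / 1024 ))   # -> 1020 (MiB per FTL band)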
[2024-10-17 16:43:57.324491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:21.050 [2024-10-17 16:43:57.324585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:29:21.051 [2024-10-17 16:43:57.324870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.324989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:21.051 [2024-10-17 16:43:57.325577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:21.052 [2024-10-17 16:43:57.325597] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:21.052 [2024-10-17 16:43:57.325611] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 907264b3-0ff6-4ba0-b9b8-009b4bf6b5fb 00:29:21.052 [2024-10-17 16:43:57.325623] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:21.052 [2024-10-17 16:43:57.325638] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:21.052 [2024-10-17 16:43:57.325648] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:21.052 [2024-10-17 16:43:57.325661] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:21.052 [2024-10-17 16:43:57.325671] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:21.052 [2024-10-17 16:43:57.325685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:21.052 [2024-10-17 16:43:57.325719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:21.052 [2024-10-17 16:43:57.325731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:21.052 [2024-10-17 16:43:57.325740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:21.052 [2024-10-17 16:43:57.325754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.052 [2024-10-17 16:43:57.325765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:21.052 [2024-10-17 16:43:57.325779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.599 ms 00:29:21.052 [2024-10-17 16:43:57.325789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.316 [2024-10-17 16:43:57.347262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.316 [2024-10-17 16:43:57.347314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:21.316 [2024-10-17 16:43:57.347332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.412 ms 00:29:21.316 [2024-10-17 16:43:57.347347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.316 [2024-10-17 16:43:57.347977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:21.316 [2024-10-17 16:43:57.347991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:21.316 [2024-10-17 16:43:57.348005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:29:21.316 [2024-10-17 16:43:57.348016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.316 [2024-10-17 16:43:57.423774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.316 [2024-10-17 16:43:57.423839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:21.316 [2024-10-17 16:43:57.423857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.316 [2024-10-17 16:43:57.423872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:21.316 [2024-10-17 16:43:57.423962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.316 [2024-10-17 16:43:57.423974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:21.316 [2024-10-17 16:43:57.423987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.316 [2024-10-17 16:43:57.423998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.316 [2024-10-17 16:43:57.424156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.316 [2024-10-17 16:43:57.424171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:21.316 [2024-10-17 16:43:57.424185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.316 [2024-10-17 16:43:57.424196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.316 [2024-10-17 16:43:57.424232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.316 [2024-10-17 16:43:57.424245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:21.316 [2024-10-17 16:43:57.424258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.316 [2024-10-17 16:43:57.424268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.316 [2024-10-17 16:43:57.565295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.316 [2024-10-17 16:43:57.565547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:21.316 [2024-10-17 16:43:57.565578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.316 [2024-10-17 16:43:57.565594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.676317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.577 [2024-10-17 16:43:57.676388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:21.577 [2024-10-17 16:43:57.676407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.577 [2024-10-17 16:43:57.676418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.676574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.577 [2024-10-17 16:43:57.676587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:21.577 [2024-10-17 16:43:57.676601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.577 [2024-10-17 16:43:57.676611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.676696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.577 [2024-10-17 16:43:57.676712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:21.577 [2024-10-17 16:43:57.676743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.577 [2024-10-17 16:43:57.676755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.676892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.577 [2024-10-17 16:43:57.676907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:21.577 [2024-10-17 16:43:57.676920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.577 [2024-10-17 
16:43:57.676931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.676987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.577 [2024-10-17 16:43:57.677003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:21.577 [2024-10-17 16:43:57.677020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.577 [2024-10-17 16:43:57.677030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.677079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.577 [2024-10-17 16:43:57.677090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:21.577 [2024-10-17 16:43:57.677103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.577 [2024-10-17 16:43:57.677113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.677174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:21.577 [2024-10-17 16:43:57.677189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:21.577 [2024-10-17 16:43:57.677202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:21.577 [2024-10-17 16:43:57.677212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:21.577 [2024-10-17 16:43:57.677394] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 584.036 ms, result 0 00:29:21.577 true 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74122 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 74122 ']' 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 74122 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74122 00:29:21.577 killing process with pid 74122 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74122' 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 74122 00:29:21.577 16:43:57 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 74122 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:26.849 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:26.850 16:44:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:29:26.850 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:29:26.850 fio-3.35 00:29:26.850 Starting 1 thread 00:29:32.119 00:29:32.119 test: (groupid=0, jobs=1): err= 0: pid=74356: Thu Oct 17 16:44:08 2024 00:29:32.119 read: IOPS=996, BW=66.2MiB/s (69.4MB/s)(255MiB/3845msec) 00:29:32.119 slat (nsec): min=4399, max=30213, avg=6625.82, stdev=3151.88 00:29:32.119 clat (usec): min=303, max=2618, avg=451.93, stdev=66.26 00:29:32.119 lat (usec): min=308, max=2623, avg=458.55, stdev=66.53 00:29:32.119 clat percentiles (usec): 00:29:32.119 | 1.00th=[ 334], 5.00th=[ 359], 10.00th=[ 392], 20.00th=[ 412], 00:29:32.119 | 30.00th=[ 420], 40.00th=[ 429], 50.00th=[ 437], 60.00th=[ 457], 00:29:32.119 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 523], 95.00th=[ 553], 00:29:32.119 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 742], 99.95th=[ 816], 00:29:32.119 | 99.99th=[ 2606] 00:29:32.119 write: IOPS=1003, BW=66.7MiB/s (69.9MB/s)(256MiB/3841msec); 0 zone resets 00:29:32.119 slat (nsec): min=15449, max=84139, avg=20610.72, stdev=5435.92 00:29:32.119 clat (usec): min=363, max=2074, avg=509.51, stdev=73.46 00:29:32.119 lat (usec): min=382, max=2093, avg=530.12, stdev=73.65 00:29:32.119 clat percentiles (usec): 00:29:32.119 | 1.00th=[ 416], 5.00th=[ 429], 10.00th=[ 441], 20.00th=[ 449], 00:29:32.119 | 30.00th=[ 461], 40.00th=[ 482], 50.00th=[ 506], 60.00th=[ 519], 00:29:32.119 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 619], 00:29:32.119 | 99.00th=[ 766], 99.50th=[ 840], 99.90th=[ 889], 99.95th=[ 947], 00:29:32.119 | 99.99th=[ 2073] 00:29:32.119 bw ( KiB/s): min=66232, max=70176, per=99.84%, avg=68155.43, stdev=1418.64, samples=7 00:29:32.119 iops : min= 974, max= 1032, avg=1002.29, stdev=20.86, samples=7 00:29:32.119 lat (usec) : 500=65.00%, 750=34.30%, 1000=0.68% 00:29:32.119 
lat (msec) : 4=0.03% 00:29:32.119 cpu : usr=99.12%, sys=0.23%, ctx=8, majf=0, minf=1169 00:29:32.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.119 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.119 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:32.119 00:29:32.119 Run status group 0 (all jobs): 00:29:32.119 READ: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=255MiB (267MB), run=3845-3845msec 00:29:32.119 WRITE: bw=66.7MiB/s (69.9MB/s), 66.7MiB/s-66.7MiB/s (69.9MB/s-69.9MB/s), io=256MiB (269MB), run=3841-3841msec 00:29:34.023 ----------------------------------------------------- 00:29:34.023 Suppressions used: 00:29:34.023 count bytes template 00:29:34.023 1 5 /usr/src/fio/parse.c 00:29:34.023 1 8 libtcmalloc_minimal.so 00:29:34.023 1 904 libcrypto.so 00:29:34.023 ----------------------------------------------------- 00:29:34.023 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:34.023 16:44:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:29:34.283 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:29:34.283 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:29:34.283 fio-3.35 00:29:34.283 Starting 2 threads 00:30:00.864 00:30:00.864 first_half: (groupid=0, jobs=1): err= 0: pid=74459: Thu Oct 17 16:44:35 2024 00:30:00.864 read: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(256MiB/23218msec) 00:30:00.864 slat (nsec): min=3462, max=29898, avg=6084.14, stdev=1750.57 00:30:00.864 clat (usec): min=599, max=253178, avg=38141.76, stdev=24004.26 00:30:00.864 lat (usec): min=603, max=253185, avg=38147.84, stdev=24004.52 00:30:00.864 clat percentiles (msec): 00:30:00.864 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:30:00.864 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:30:00.864 | 70.00th=[ 33], 80.00th=[ 38], 90.00th=[ 40], 95.00th=[ 81], 00:30:00.864 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 203], 99.95th=[ 220], 00:30:00.864 | 99.99th=[ 247] 00:30:00.864 write: IOPS=2826, BW=11.0MiB/s (11.6MB/s)(256MiB/23187msec); 0 zone resets 00:30:00.864 slat (usec): min=4, max=692, avg= 7.28, stdev= 5.93 00:30:00.864 clat (usec): min=394, max=44292, avg=7211.70, stdev=6683.86 00:30:00.864 lat (usec): min=407, max=44301, avg=7218.98, stdev=6683.97 00:30:00.864 clat percentiles (usec): 00:30:00.864 | 1.00th=[ 1045], 5.00th=[ 1369], 10.00th=[ 1680], 20.00th=[ 2999], 00:30:00.864 | 30.00th=[ 4080], 40.00th=[ 5276], 50.00th=[ 5866], 60.00th=[ 6587], 00:30:00.864 | 70.00th=[ 7177], 80.00th=[ 8455], 90.00th=[12387], 95.00th=[21103], 00:30:00.864 | 99.00th=[36963], 99.50th=[38536], 99.90th=[41157], 99.95th=[41681], 00:30:00.864 | 99.99th=[42730] 00:30:00.864 bw ( KiB/s): min= 720, max=53432, per=96.00%, avg=21706.00, stdev=15907.19, samples=24 00:30:00.864 iops : min= 180, max=13358, avg=5426.50, stdev=3976.80, samples=24 00:30:00.864 lat (usec) : 500=0.03%, 750=0.10%, 1000=0.27% 00:30:00.864 lat (msec) : 2=6.38%, 4=7.84%, 10=28.76%, 20=5.51%, 50=47.48% 00:30:00.864 lat (msec) : 100=1.69%, 250=1.96%, 500=0.01% 00:30:00.864 cpu : usr=99.24%, sys=0.18%, ctx=44, majf=0, minf=5534 00:30:00.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:00.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.864 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.864 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.864 second_half: (groupid=0, jobs=1): err= 0: pid=74460: Thu Oct 17 16:44:35 2024 00:30:00.864 read: IOPS=2842, BW=11.1MiB/s (11.6MB/s)(256MiB/23039msec) 00:30:00.864 slat (nsec): min=3487, max=93219, avg=6055.84, stdev=1853.52 00:30:00.864 clat (msec): min=9, max=202, avg=38.63, stdev=22.48 00:30:00.864 lat (msec): min=9, max=202, avg=38.63, stdev=22.48 00:30:00.864 clat percentiles (msec): 00:30:00.864 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32], 00:30:00.864 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:30:00.864 | 70.00th=[ 34], 80.00th=[ 38], 90.00th=[ 42], 95.00th=[ 72], 00:30:00.864 | 99.00th=[ 165], 99.50th=[ 
178], 99.90th=[ 190], 99.95th=[ 194], 00:30:00.864 | 99.99th=[ 199] 00:30:00.864 write: IOPS=2860, BW=11.2MiB/s (11.7MB/s)(256MiB/22914msec); 0 zone resets 00:30:00.864 slat (usec): min=3, max=451, avg= 7.22, stdev= 5.96 00:30:00.864 clat (usec): min=397, max=55799, avg=6378.81, stdev=3937.18 00:30:00.864 lat (usec): min=412, max=55805, avg=6386.03, stdev=3937.46 00:30:00.864 clat percentiles (usec): 00:30:00.864 | 1.00th=[ 1205], 5.00th=[ 1926], 10.00th=[ 2540], 20.00th=[ 3654], 00:30:00.864 | 30.00th=[ 4686], 40.00th=[ 5211], 50.00th=[ 5800], 60.00th=[ 6390], 00:30:00.864 | 70.00th=[ 6783], 80.00th=[ 7898], 90.00th=[11469], 95.00th=[12780], 00:30:00.864 | 99.00th=[20317], 99.50th=[24249], 99.90th=[49021], 99.95th=[53740], 00:30:00.864 | 99.99th=[55313] 00:30:00.864 bw ( KiB/s): min= 1000, max=41368, per=100.00%, avg=22795.13, stdev=14365.21, samples=23 00:30:00.864 iops : min= 250, max=10342, avg=5698.78, stdev=3591.30, samples=23 00:30:00.864 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.20% 00:30:00.864 lat (msec) : 2=2.50%, 4=9.71%, 10=30.46%, 20=6.62%, 50=46.72% 00:30:00.864 lat (msec) : 100=1.97%, 250=1.77% 00:30:00.864 cpu : usr=99.12%, sys=0.24%, ctx=43, majf=0, minf=5581 00:30:00.864 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:30:00.864 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:00.864 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:00.864 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:00.864 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:00.864 00:30:00.864 Run status group 0 (all jobs): 00:30:00.864 READ: bw=22.0MiB/s (23.1MB/s), 11.0MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=512MiB (536MB), run=23039-23218msec 00:30:00.864 WRITE: bw=22.1MiB/s (23.2MB/s), 11.0MiB/s-11.2MiB/s (11.6MB/s-11.7MB/s), io=512MiB (537MB), run=22914-23187msec 00:30:01.824 ----------------------------------------------------- 00:30:01.824 Suppressions used: 00:30:01.824 count bytes template 00:30:01.824 2 10 /usr/src/fio/parse.c 00:30:01.824 4 384 /usr/src/fio/iolog.c 00:30:01.824 1 8 libtcmalloc_minimal.so 00:30:01.824 1 904 libcrypto.so 00:30:01.824 ----------------------------------------------------- 00:30:01.824 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:01.824 16:44:37 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:30:02.089 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:30:02.089 fio-3.35 00:30:02.089 Starting 1 thread 00:30:16.980 00:30:16.980 test: (groupid=0, jobs=1): err= 0: pid=74773: Thu Oct 17 16:44:52 2024 00:30:16.980 read: IOPS=7776, BW=30.4MiB/s (31.9MB/s)(255MiB/8384msec) 00:30:16.980 slat (nsec): min=3441, max=32352, avg=5300.68, stdev=1677.44 00:30:16.980 clat (usec): min=607, max=244977, avg=16449.74, stdev=8171.63 00:30:16.980 lat (usec): min=626, max=244981, avg=16455.04, stdev=8171.62 00:30:16.980 clat percentiles (msec): 00:30:16.980 | 1.00th=[ 16], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 16], 00:30:16.980 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 17], 00:30:16.980 | 70.00th=[ 17], 80.00th=[ 17], 90.00th=[ 17], 95.00th=[ 18], 00:30:16.980 | 99.00th=[ 22], 99.50th=[ 27], 99.90th=[ 199], 99.95th=[ 203], 00:30:16.980 | 99.99th=[ 205] 00:30:16.980 write: IOPS=13.2k, BW=51.7MiB/s (54.2MB/s)(256MiB/4954msec); 0 zone resets 00:30:16.980 slat (usec): min=4, max=666, avg= 7.85, stdev= 5.93 00:30:16.980 clat (usec): min=551, max=59261, avg=9628.72, stdev=11892.23 00:30:16.980 lat (usec): min=559, max=59270, avg=9636.57, stdev=11892.25 00:30:16.980 clat percentiles (usec): 00:30:16.980 | 1.00th=[ 963], 5.00th=[ 1172], 10.00th=[ 1319], 20.00th=[ 1500], 00:30:16.980 | 30.00th=[ 1680], 40.00th=[ 2147], 50.00th=[ 6390], 60.00th=[ 7308], 00:30:16.980 | 70.00th=[ 8291], 80.00th=[10028], 90.00th=[34341], 95.00th=[36439], 00:30:16.980 | 99.00th=[42206], 99.50th=[47449], 99.90th=[56361], 99.95th=[56886], 00:30:16.980 | 99.99th=[58459] 00:30:16.980 bw ( KiB/s): min=40408, max=69448, per=99.08%, avg=52428.80, stdev=9467.76, samples=10 00:30:16.980 iops : min=10102, max=17362, avg=13107.20, stdev=2366.94, samples=10 00:30:16.980 lat (usec) : 750=0.01%, 1000=0.72% 00:30:16.980 lat (msec) : 2=18.71%, 4=1.68%, 10=18.98%, 20=51.30%, 50=8.30% 00:30:16.980 lat (msec) : 100=0.19%, 250=0.10% 00:30:16.980 cpu : usr=98.95%, sys=0.33%, ctx=28, majf=0, minf=5565 00:30:16.980 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:16.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.980 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:16.980 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:16.980 00:30:16.980 Run status group 0 (all jobs): 00:30:16.980 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=255MiB (267MB), run=8384-8384msec 00:30:16.980 WRITE: bw=51.7MiB/s (54.2MB/s), 51.7MiB/s-51.7MiB/s (54.2MB/s-54.2MB/s), io=256MiB (268MB), run=4954-4954msec 00:30:18.884 ----------------------------------------------------- 00:30:18.884 Suppressions used: 00:30:18.884 count bytes template 00:30:18.884 1 5 /usr/src/fio/parse.c 00:30:18.884 2 192 /usr/src/fio/iolog.c 00:30:18.884 1 8 libtcmalloc_minimal.so 00:30:18.884 1 904 libcrypto.so 00:30:18.884 ----------------------------------------------------- 00:30:18.884 00:30:18.884 16:44:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:30:18.884 16:44:55 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:18.884 16:44:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:19.143 Remove shared memory files 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57752 /dev/shm/spdk_tgt_trace.pid73006 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:30:19.143 ************************************ 00:30:19.143 END TEST ftl_fio_basic 00:30:19.143 ************************************ 00:30:19.143 00:30:19.143 real 1m9.025s 00:30:19.143 user 2m29.783s 00:30:19.143 sys 0m3.941s 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:19.143 16:44:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:30:19.144 16:44:55 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:30:19.144 16:44:55 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:19.144 16:44:55 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:19.144 16:44:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:19.144 ************************************ 00:30:19.144 START TEST ftl_bdevperf 00:30:19.144 ************************************ 00:30:19.144 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:30:19.144 * Looking for test storage... 
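
Note on the fio invocations traced above: each randw-verify run launches fio through the same wrapper pattern, resolving the ASan runtime from the spdk_bdev fio plugin's own ldd output and preloading both before fio starts. A minimal bash sketch of that pattern, paraphrased from the xtrace (fio_plugin_sketch is an illustrative name, not the verbatim autotest_common.sh helper):

    fio_plugin_sketch() {
        # Plugin path as used throughout the trace above.
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
        local asan_lib
        # Same probe the harness runs: which libasan does the plugin link against?
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # Preload ASan (when found) ahead of the plugin, then run the job file.
        LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$@"
    }

Called as fio_plugin_sketch /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio, this reproduces the LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' line seen before each fio run.
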
00:30:19.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:19.144 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:19.144 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:30:19.144 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:19.402 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:19.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.403 --rc genhtml_branch_coverage=1 00:30:19.403 --rc genhtml_function_coverage=1 00:30:19.403 --rc genhtml_legend=1 00:30:19.403 --rc geninfo_all_blocks=1 00:30:19.403 --rc geninfo_unexecuted_blocks=1 00:30:19.403 00:30:19.403 ' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:19.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.403 --rc genhtml_branch_coverage=1 00:30:19.403 
--rc genhtml_function_coverage=1 00:30:19.403 --rc genhtml_legend=1 00:30:19.403 --rc geninfo_all_blocks=1 00:30:19.403 --rc geninfo_unexecuted_blocks=1 00:30:19.403 00:30:19.403 ' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:19.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.403 --rc genhtml_branch_coverage=1 00:30:19.403 --rc genhtml_function_coverage=1 00:30:19.403 --rc genhtml_legend=1 00:30:19.403 --rc geninfo_all_blocks=1 00:30:19.403 --rc geninfo_unexecuted_blocks=1 00:30:19.403 00:30:19.403 ' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:19.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.403 --rc genhtml_branch_coverage=1 00:30:19.403 --rc genhtml_function_coverage=1 00:30:19.403 --rc genhtml_legend=1 00:30:19.403 --rc geninfo_all_blocks=1 00:30:19.403 --rc geninfo_unexecuted_blocks=1 00:30:19.403 00:30:19.403 ' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:30:19.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75017 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75017 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 75017 ']' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:19.403 16:44:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:19.403 [2024-10-17 16:44:55.654683] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
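
The trace above starts bdevperf with -z (start suspended, wait for RPC) and then blocks in waitforlisten 75017. The argument check, the /var/tmp/spdk.sock default, and max_retries=100 are visible in the xtrace; the polling loop below is an assumed sketch of the rest, not the verbatim autotest_common.sh implementation:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            # Assumed liveness probe: any successful RPC means the socket is up.
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                > /dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }
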
00:30:19.403 [2024-10-17 16:44:55.654823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75017 ] 00:30:19.662 [2024-10-17 16:44:55.822592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.662 [2024-10-17 16:44:55.944941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:30:20.230 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:30:20.798 16:44:56 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:20.798 16:44:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:20.798 { 00:30:20.798 "name": "nvme0n1", 00:30:20.798 "aliases": [ 00:30:20.798 "eaf61a4d-34d4-4b6a-8ad2-11b1d56b5f59" 00:30:20.798 ], 00:30:20.798 "product_name": "NVMe disk", 00:30:20.798 "block_size": 4096, 00:30:20.798 "num_blocks": 1310720, 00:30:20.798 "uuid": "eaf61a4d-34d4-4b6a-8ad2-11b1d56b5f59", 00:30:20.798 "numa_id": -1, 00:30:20.798 "assigned_rate_limits": { 00:30:20.798 "rw_ios_per_sec": 0, 00:30:20.798 "rw_mbytes_per_sec": 0, 00:30:20.798 "r_mbytes_per_sec": 0, 00:30:20.798 "w_mbytes_per_sec": 0 00:30:20.798 }, 00:30:20.798 "claimed": true, 00:30:20.798 "claim_type": "read_many_write_one", 00:30:20.798 "zoned": false, 00:30:20.798 "supported_io_types": { 00:30:20.798 "read": true, 00:30:20.798 "write": true, 00:30:20.798 "unmap": true, 00:30:20.798 "flush": true, 00:30:20.798 "reset": true, 00:30:20.798 "nvme_admin": true, 00:30:20.798 "nvme_io": true, 00:30:20.798 "nvme_io_md": false, 00:30:20.798 "write_zeroes": true, 00:30:20.798 "zcopy": false, 00:30:20.798 "get_zone_info": false, 00:30:20.798 "zone_management": false, 00:30:20.798 "zone_append": false, 00:30:20.798 "compare": true, 00:30:20.798 "compare_and_write": false, 00:30:20.798 "abort": true, 00:30:20.798 "seek_hole": false, 00:30:20.798 "seek_data": false, 00:30:20.798 "copy": true, 00:30:20.798 "nvme_iov_md": false 00:30:20.798 }, 00:30:20.798 "driver_specific": { 00:30:20.798 
"nvme": [ 00:30:20.798 { 00:30:20.798 "pci_address": "0000:00:11.0", 00:30:20.798 "trid": { 00:30:20.798 "trtype": "PCIe", 00:30:20.798 "traddr": "0000:00:11.0" 00:30:20.798 }, 00:30:20.798 "ctrlr_data": { 00:30:20.798 "cntlid": 0, 00:30:20.798 "vendor_id": "0x1b36", 00:30:20.798 "model_number": "QEMU NVMe Ctrl", 00:30:20.798 "serial_number": "12341", 00:30:20.798 "firmware_revision": "8.0.0", 00:30:20.798 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:20.798 "oacs": { 00:30:20.798 "security": 0, 00:30:20.798 "format": 1, 00:30:20.798 "firmware": 0, 00:30:20.798 "ns_manage": 1 00:30:20.798 }, 00:30:20.798 "multi_ctrlr": false, 00:30:20.798 "ana_reporting": false 00:30:20.798 }, 00:30:20.798 "vs": { 00:30:20.798 "nvme_version": "1.4" 00:30:20.798 }, 00:30:20.798 "ns_data": { 00:30:20.798 "id": 1, 00:30:20.798 "can_share": false 00:30:20.798 } 00:30:20.798 } 00:30:20.798 ], 00:30:20.798 "mp_policy": "active_passive" 00:30:20.798 } 00:30:20.798 } 00:30:20.798 ]' 00:30:20.798 16:44:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:20.798 16:44:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:30:20.798 16:44:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:21.058 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:21.316 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d78fa835-e2ce-45f6-9b70-f04d4a9ac1b7 00:30:21.316 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:30:21.316 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d78fa835-e2ce-45f6-9b70-f04d4a9ac1b7 00:30:21.574 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:21.575 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=3791e161-41c2-48fc-963e-f7d0473f98e7 00:30:21.575 16:44:57 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3791e161-41c2-48fc-963e-f7d0473f98e7 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:21.833 16:44:58 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:30:21.833 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:22.092 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:22.092 { 00:30:22.092 "name": "1bc674b6-d16d-454c-876a-8ed0533c1565", 00:30:22.092 "aliases": [ 00:30:22.092 "lvs/nvme0n1p0" 00:30:22.092 ], 00:30:22.092 "product_name": "Logical Volume", 00:30:22.092 "block_size": 4096, 00:30:22.092 "num_blocks": 26476544, 00:30:22.092 "uuid": "1bc674b6-d16d-454c-876a-8ed0533c1565", 00:30:22.092 "assigned_rate_limits": { 00:30:22.092 "rw_ios_per_sec": 0, 00:30:22.092 "rw_mbytes_per_sec": 0, 00:30:22.092 "r_mbytes_per_sec": 0, 00:30:22.092 "w_mbytes_per_sec": 0 00:30:22.092 }, 00:30:22.092 "claimed": false, 00:30:22.092 "zoned": false, 00:30:22.092 "supported_io_types": { 00:30:22.092 "read": true, 00:30:22.092 "write": true, 00:30:22.092 "unmap": true, 00:30:22.092 "flush": false, 00:30:22.092 "reset": true, 00:30:22.092 "nvme_admin": false, 00:30:22.092 "nvme_io": false, 00:30:22.092 "nvme_io_md": false, 00:30:22.092 "write_zeroes": true, 00:30:22.092 "zcopy": false, 00:30:22.092 "get_zone_info": false, 00:30:22.092 "zone_management": false, 00:30:22.092 "zone_append": false, 00:30:22.092 "compare": false, 00:30:22.092 "compare_and_write": false, 00:30:22.092 "abort": false, 00:30:22.092 "seek_hole": true, 00:30:22.092 "seek_data": true, 00:30:22.092 "copy": false, 00:30:22.092 "nvme_iov_md": false 00:30:22.092 }, 00:30:22.092 "driver_specific": { 00:30:22.092 "lvol": { 00:30:22.092 "lvol_store_uuid": "3791e161-41c2-48fc-963e-f7d0473f98e7", 00:30:22.092 "base_bdev": "nvme0n1", 00:30:22.092 "thin_provision": true, 00:30:22.092 "num_allocated_clusters": 0, 00:30:22.092 "snapshot": false, 00:30:22.092 "clone": false, 00:30:22.092 "esnap_clone": false 00:30:22.092 } 00:30:22.092 } 00:30:22.092 } 00:30:22.092 ]' 00:30:22.092 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:22.092 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:30:22.092 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:22.357 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:22.357 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:22.358 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:30:22.358 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:30:22.358 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:30:22.358 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:22.616 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:22.616 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:22.616 16:44:58 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:22.616 16:44:58 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:22.616 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:22.616 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:30:22.616 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:30:22.617 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:22.875 16:44:58 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:22.875 { 00:30:22.875 "name": "1bc674b6-d16d-454c-876a-8ed0533c1565", 00:30:22.875 "aliases": [ 00:30:22.875 "lvs/nvme0n1p0" 00:30:22.875 ], 00:30:22.875 "product_name": "Logical Volume", 00:30:22.875 "block_size": 4096, 00:30:22.875 "num_blocks": 26476544, 00:30:22.875 "uuid": "1bc674b6-d16d-454c-876a-8ed0533c1565", 00:30:22.875 "assigned_rate_limits": { 00:30:22.875 "rw_ios_per_sec": 0, 00:30:22.875 "rw_mbytes_per_sec": 0, 00:30:22.875 "r_mbytes_per_sec": 0, 00:30:22.875 "w_mbytes_per_sec": 0 00:30:22.875 }, 00:30:22.875 "claimed": false, 00:30:22.875 "zoned": false, 00:30:22.875 "supported_io_types": { 00:30:22.875 "read": true, 00:30:22.875 "write": true, 00:30:22.875 "unmap": true, 00:30:22.875 "flush": false, 00:30:22.875 "reset": true, 00:30:22.875 "nvme_admin": false, 00:30:22.875 "nvme_io": false, 00:30:22.875 "nvme_io_md": false, 00:30:22.875 "write_zeroes": true, 00:30:22.875 "zcopy": false, 00:30:22.875 "get_zone_info": false, 00:30:22.875 "zone_management": false, 00:30:22.875 "zone_append": false, 00:30:22.875 "compare": false, 00:30:22.875 "compare_and_write": false, 00:30:22.875 "abort": false, 00:30:22.875 "seek_hole": true, 00:30:22.875 "seek_data": true, 00:30:22.875 "copy": false, 00:30:22.875 "nvme_iov_md": false 00:30:22.875 }, 00:30:22.875 "driver_specific": { 00:30:22.875 "lvol": { 00:30:22.875 "lvol_store_uuid": "3791e161-41c2-48fc-963e-f7d0473f98e7", 00:30:22.875 "base_bdev": "nvme0n1", 00:30:22.875 "thin_provision": true, 00:30:22.875 "num_allocated_clusters": 0, 00:30:22.875 "snapshot": false, 00:30:22.875 "clone": false, 00:30:22.875 "esnap_clone": false 00:30:22.875 } 00:30:22.875 } 00:30:22.875 } 00:30:22.875 ]' 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:30:22.875 16:44:59 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:23.133 16:44:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:30:23.133 16:44:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:23.133 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:23.133 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:23.133 16:44:59 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:30:23.133 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:30:23.133 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1bc674b6-d16d-454c-876a-8ed0533c1565 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:23.391 { 00:30:23.391 "name": "1bc674b6-d16d-454c-876a-8ed0533c1565", 00:30:23.391 "aliases": [ 00:30:23.391 "lvs/nvme0n1p0" 00:30:23.391 ], 00:30:23.391 "product_name": "Logical Volume", 00:30:23.391 "block_size": 4096, 00:30:23.391 "num_blocks": 26476544, 00:30:23.391 "uuid": "1bc674b6-d16d-454c-876a-8ed0533c1565", 00:30:23.391 "assigned_rate_limits": { 00:30:23.391 "rw_ios_per_sec": 0, 00:30:23.391 "rw_mbytes_per_sec": 0, 00:30:23.391 "r_mbytes_per_sec": 0, 00:30:23.391 "w_mbytes_per_sec": 0 00:30:23.391 }, 00:30:23.391 "claimed": false, 00:30:23.391 "zoned": false, 00:30:23.391 "supported_io_types": { 00:30:23.391 "read": true, 00:30:23.391 "write": true, 00:30:23.391 "unmap": true, 00:30:23.391 "flush": false, 00:30:23.391 "reset": true, 00:30:23.391 "nvme_admin": false, 00:30:23.391 "nvme_io": false, 00:30:23.391 "nvme_io_md": false, 00:30:23.391 "write_zeroes": true, 00:30:23.391 "zcopy": false, 00:30:23.391 "get_zone_info": false, 00:30:23.391 "zone_management": false, 00:30:23.391 "zone_append": false, 00:30:23.391 "compare": false, 00:30:23.391 "compare_and_write": false, 00:30:23.391 "abort": false, 00:30:23.391 "seek_hole": true, 00:30:23.391 "seek_data": true, 00:30:23.391 "copy": false, 00:30:23.391 "nvme_iov_md": false 00:30:23.391 }, 00:30:23.391 "driver_specific": { 00:30:23.391 "lvol": { 00:30:23.391 "lvol_store_uuid": "3791e161-41c2-48fc-963e-f7d0473f98e7", 00:30:23.391 "base_bdev": "nvme0n1", 00:30:23.391 "thin_provision": true, 00:30:23.391 "num_allocated_clusters": 0, 00:30:23.391 "snapshot": false, 00:30:23.391 "clone": false, 00:30:23.391 "esnap_clone": false 00:30:23.391 } 00:30:23.391 } 00:30:23.391 } 00:30:23.391 ]' 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:30:23.391 16:44:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:30:23.392 16:44:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1bc674b6-d16d-454c-876a-8ed0533c1565 -c nvc0n1p0 --l2p_dram_limit 20 00:30:23.652 [2024-10-17 16:44:59.840694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.840764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:23.652 [2024-10-17 16:44:59.840781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:23.652 [2024-10-17 16:44:59.840812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.840876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.840893] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:23.652 [2024-10-17 16:44:59.840905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:23.652 [2024-10-17 16:44:59.840921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.840942] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:23.652 [2024-10-17 16:44:59.842107] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:23.652 [2024-10-17 16:44:59.842138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.842156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:23.652 [2024-10-17 16:44:59.842168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:30:23.652 [2024-10-17 16:44:59.842181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.842308] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9ce9474e-eb0e-471a-94fb-bd7c089128c0 00:30:23.652 [2024-10-17 16:44:59.843850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.843886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:23.652 [2024-10-17 16:44:59.843903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:23.652 [2024-10-17 16:44:59.843920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.851611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.851659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:23.652 [2024-10-17 16:44:59.851674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.647 ms 00:30:23.652 [2024-10-17 16:44:59.851685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.851807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.851839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:23.652 [2024-10-17 16:44:59.851863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:30:23.652 [2024-10-17 16:44:59.851874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.851927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.851940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:23.652 [2024-10-17 16:44:59.851955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:23.652 [2024-10-17 16:44:59.851965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.851993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:23.652 [2024-10-17 16:44:59.857630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.857669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:23.652 [2024-10-17 16:44:59.857682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.657 ms 00:30:23.652 [2024-10-17 16:44:59.857694] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.857743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.857757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:23.652 [2024-10-17 16:44:59.857772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:23.652 [2024-10-17 16:44:59.857784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.857836] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:23.652 [2024-10-17 16:44:59.857973] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:23.652 [2024-10-17 16:44:59.857991] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:23.652 [2024-10-17 16:44:59.858008] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:23.652 [2024-10-17 16:44:59.858021] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858036] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858064] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:23.652 [2024-10-17 16:44:59.858077] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:23.652 [2024-10-17 16:44:59.858088] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:23.652 [2024-10-17 16:44:59.858113] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:23.652 [2024-10-17 16:44:59.858123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.858136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:23.652 [2024-10-17 16:44:59.858147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:30:23.652 [2024-10-17 16:44:59.858164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.858234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.652 [2024-10-17 16:44:59.858248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:23.652 [2024-10-17 16:44:59.858259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:23.652 [2024-10-17 16:44:59.858274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.652 [2024-10-17 16:44:59.858374] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:23.652 [2024-10-17 16:44:59.858390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:23.652 [2024-10-17 16:44:59.858401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:23.652 [2024-10-17 16:44:59.858440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:23.652 
[2024-10-17 16:44:59.858462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:23.652 [2024-10-17 16:44:59.858473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:23.652 [2024-10-17 16:44:59.858496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:23.652 [2024-10-17 16:44:59.858509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:23.652 [2024-10-17 16:44:59.858518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:23.652 [2024-10-17 16:44:59.858543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:23.652 [2024-10-17 16:44:59.858556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:23.652 [2024-10-17 16:44:59.858571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:23.652 [2024-10-17 16:44:59.858595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:23.652 [2024-10-17 16:44:59.858629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:23.652 [2024-10-17 16:44:59.858663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:23.652 [2024-10-17 16:44:59.858694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:23.652 [2024-10-17 16:44:59.858728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:23.652 [2024-10-17 16:44:59.858767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:23.652 [2024-10-17 16:44:59.858777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:23.652 [2024-10-17 16:44:59.858801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:23.652 [2024-10-17 16:44:59.858813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:23.652 [2024-10-17 16:44:59.858823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:23.652 [2024-10-17 16:44:59.858835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:23.652 [2024-10-17 16:44:59.858845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:30:23.652 [2024-10-17 16:44:59.858857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:23.652 [2024-10-17 16:44:59.858879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:23.652 [2024-10-17 16:44:59.858889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.652 [2024-10-17 16:44:59.858901] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:23.652 [2024-10-17 16:44:59.858912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:23.653 [2024-10-17 16:44:59.858927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:23.653 [2024-10-17 16:44:59.858938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:23.653 [2024-10-17 16:44:59.858954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:23.653 [2024-10-17 16:44:59.858966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:23.653 [2024-10-17 16:44:59.858978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:23.653 [2024-10-17 16:44:59.858989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:23.653 [2024-10-17 16:44:59.859001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:23.653 [2024-10-17 16:44:59.859011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:23.653 [2024-10-17 16:44:59.859040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:23.653 [2024-10-17 16:44:59.859052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.653 [2024-10-17 16:44:59.859067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:23.653 [2024-10-17 16:44:59.859078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:23.653 [2024-10-17 16:44:59.859091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:23.653 [2024-10-17 16:44:59.859103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:23.653 [2024-10-17 16:44:59.859115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:23.653 [2024-10-17 16:44:59.859126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:23.653 [2024-10-17 16:44:59.859138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:23.653 [2024-10-17 16:44:59.859149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:23.653 [2024-10-17 16:44:59.859164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:23.653 [2024-10-17 16:44:59.859174] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:23.653 [2024-10-17 16:44:59.859187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:23.653 [2024-10-17 16:44:59.859198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:23.653 [2024-10-17 16:44:59.859211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:23.653 [2024-10-17 16:44:59.859221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:23.653 [2024-10-17 16:44:59.859235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:23.653 [2024-10-17 16:44:59.859246] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.653 [2024-10-17 16:44:59.859260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:23.653 [2024-10-17 16:44:59.859271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:23.653 [2024-10-17 16:44:59.859284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:23.653 [2024-10-17 16:44:59.859295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:23.653 [2024-10-17 16:44:59.859309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:23.653 [2024-10-17 16:44:59.859319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:23.653 [2024-10-17 16:44:59.859333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.990 ms 00:30:23.653 [2024-10-17 16:44:59.859346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:23.653 [2024-10-17 16:44:59.859387] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
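A note on the geometry just dumped, and on how this stack was assembled, may help here. The layout reports 20971520 L2P entries with a 4-byte address size, which is exactly the 80.00 MiB "Region l2p" shown above: 20971520 entries x 4 B = 83886080 B = 80 MiB. The --l2p_dram_limit 20 passed at creation caps only the DRAM-resident part of that map, which is why a later line reports "l2p maximum resident size is: 19 (of 20) MiB". The sequence below is condensed from the RPC calls traced earlier in this log; the two <...> placeholders stand in for the lvstore and lvol UUIDs a fresh run would generate.

  # base device: QEMU NVMe at 0000:00:11.0, exposed as nvme0n1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  # lvstore on the base device, plus a 103424 MiB thin-provisioned lvol on top
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
  # NV cache device: second NVMe at 0000:00:10.0, split into one 5171 MiB partition
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # tie base lvol and cache partition together as ftl0 with the 20 MiB L2P DRAM cap
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The NV cache scrub announced on the next lines covers the 5 chunks of that 5171 MiB partition and accounts for most of the startup time.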
00:30:23.653 [2024-10-17 16:44:59.859400] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:26.941 [2024-10-17 16:45:02.951134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:02.951378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:26.941 [2024-10-17 16:45:02.951486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3096.757 ms 00:30:26.941 [2024-10-17 16:45:02.951527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:02.992884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:02.993125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:26.941 [2024-10-17 16:45:02.993242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.049 ms 00:30:26.941 [2024-10-17 16:45:02.993283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:02.993479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:02.993531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:26.941 [2024-10-17 16:45:02.993633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:26.941 [2024-10-17 16:45:02.993665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.059085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.059309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:26.941 [2024-10-17 16:45:03.059429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.404 ms 00:30:26.941 [2024-10-17 16:45:03.059468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.059540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.059628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:26.941 [2024-10-17 16:45:03.059669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:26.941 [2024-10-17 16:45:03.059716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.060292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.060436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:26.941 [2024-10-17 16:45:03.060528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:30:26.941 [2024-10-17 16:45:03.060567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.060732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.060774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:26.941 [2024-10-17 16:45:03.060864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:30:26.941 [2024-10-17 16:45:03.060901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.080784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.080947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:26.941 [2024-10-17 
16:45:03.081064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.865 ms 00:30:26.941 [2024-10-17 16:45:03.081105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.094905] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:30:26.941 [2024-10-17 16:45:03.101144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.101296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:26.941 [2024-10-17 16:45:03.101412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.939 ms 00:30:26.941 [2024-10-17 16:45:03.101466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.182596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.182854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:26.941 [2024-10-17 16:45:03.182992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.202 ms 00:30:26.941 [2024-10-17 16:45:03.183035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.183253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.183323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:26.941 [2024-10-17 16:45:03.183370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:30:26.941 [2024-10-17 16:45:03.183402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:26.941 [2024-10-17 16:45:03.222417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:26.941 [2024-10-17 16:45:03.222606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:26.941 [2024-10-17 16:45:03.222722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.998 ms 00:30:26.941 [2024-10-17 16:45:03.222768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.261557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.261750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:27.201 [2024-10-17 16:45:03.261777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.778 ms 00:30:27.201 [2024-10-17 16:45:03.261791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.262485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.262506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:27.201 [2024-10-17 16:45:03.262518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:30:27.201 [2024-10-17 16:45:03.262532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.367883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.367950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:27.201 [2024-10-17 16:45:03.367971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.454 ms 00:30:27.201 [2024-10-17 16:45:03.367985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 
16:45:03.407447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.407682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:27.201 [2024-10-17 16:45:03.407726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.429 ms 00:30:27.201 [2024-10-17 16:45:03.407742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.448302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.448366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:27.201 [2024-10-17 16:45:03.448389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.509 ms 00:30:27.201 [2024-10-17 16:45:03.448402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.486453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.486504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:27.201 [2024-10-17 16:45:03.486519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.028 ms 00:30:27.201 [2024-10-17 16:45:03.486533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.486579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.486600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:27.201 [2024-10-17 16:45:03.486612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:27.201 [2024-10-17 16:45:03.486625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.486753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.201 [2024-10-17 16:45:03.486770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:27.201 [2024-10-17 16:45:03.486782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:27.201 [2024-10-17 16:45:03.486795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.201 [2024-10-17 16:45:03.487954] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3652.702 ms, result 0 00:30:27.201 { 00:30:27.201 "name": "ftl0", 00:30:27.201 "uuid": "9ce9474e-eb0e-471a-94fb-bd7c089128c0" 00:30:27.201 } 00:30:27.460 16:45:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:30:27.460 16:45:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:30:27.460 16:45:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:30:27.460 16:45:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:30:27.720 [2024-10-17 16:45:03.851878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:30:27.720 I/O size of 69632 is greater than zero copy threshold (65536). 00:30:27.720 Zero copy mechanism will not be used. 00:30:27.720 Running I/O for 4 seconds... 
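For the record, the knobs on this first perform_tests pass are: -q 1 sets the queue depth, -w randwrite the workload, -t 4 the run time in seconds, and -o 69632 the I/O size in bytes. That size works out to

  69632 B = 17 x 4096 B blocks = 68 KiB > 65536 B

which is why bdevperf announces above that the zero-copy mechanism will not be used. At queue depth 1 the run below measures essentially single-I/O write latency through the FTL stack.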
00:30:29.611 2025.00 IOPS, 134.47 MiB/s
[2024-10-17T16:45:07.286Z] 2005.50 IOPS, 133.18 MiB/s
[2024-10-17T16:45:08.222Z] 2013.67 IOPS, 133.72 MiB/s
[2024-10-17T16:45:08.222Z] 2017.25 IOPS, 133.96 MiB/s
00:30:31.923 Latency(us)
00:30:31.923 [2024-10-17T16:45:08.222Z] Device Information          : runtime(s)      IOPS     MiB/s    Fail/s    TO/s    Average       min       max
00:30:31.924 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:30:31.924 ftl0                        :       4.00   2016.37    133.90      0.00    0.00     520.28    199.04   2434.57
00:30:31.924 [2024-10-17T16:45:08.223Z] ===================================================================================================================
00:30:31.924 [2024-10-17T16:45:08.223Z] Total                       :              2016.37    133.90      0.00    0.00     520.28    199.04   2434.57
00:30:31.924 [2024-10-17 16:45:07.857649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:30:31.924 {
00:30:31.924   "results": [
00:30:31.924     {
00:30:31.924       "job": "ftl0",
00:30:31.924       "core_mask": "0x1",
00:30:31.924       "workload": "randwrite",
00:30:31.924       "status": "finished",
00:30:31.924       "queue_depth": 1,
00:30:31.924       "io_size": 69632,
00:30:31.924       "runtime": 4.002233,
00:30:31.924       "iops": 2016.3743590140805,
00:30:31.924       "mibps": 133.8998597782788,
00:30:31.924       "io_failed": 0,
00:30:31.924       "io_timeout": 0,
00:30:31.924       "avg_latency_us": 520.2829936847763,
00:30:31.924       "min_latency_us": 199.0425702811245,
00:30:31.924       "max_latency_us": 2434.570281124498
00:30:31.924     }
00:30:31.924   ],
00:30:31.924   "core_count": 1
00:30:31.924 }
00:30:31.924 16:45:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:30:31.924 [2024-10-17 16:45:07.995528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:30:31.924 Running I/O for 4 seconds...
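A quick consistency check on the numbers just reported: throughput should equal IOPS times I/O size, and it does to the shown precision,

  2016.37 IOPS x 69632 B ≈ 140404000 B/s ≈ 133.90 MiB/s

matching the "mibps" field above. The second pass, launched on the last lines, keeps the random-write workload but moves to 4096-byte I/Os at queue depth 128, trading per-I/O latency for parallelism.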
00:30:33.793 10899.00 IOPS, 42.57 MiB/s
[2024-10-17T16:45:11.029Z] 10606.00 IOPS, 41.43 MiB/s
[2024-10-17T16:45:12.405Z] 10294.33 IOPS, 40.21 MiB/s
[2024-10-17T16:45:12.405Z] 10157.75 IOPS, 39.68 MiB/s
00:30:36.106 Latency(us)
00:30:36.106 [2024-10-17T16:45:12.405Z] Device Information          : runtime(s)      IOPS     MiB/s    Fail/s    TO/s    Average       min       max
00:30:36.106 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:30:36.106 ftl0                        :       4.02  10127.80     39.56      0.00    0.00   12598.58    230.30  37689.78
00:30:36.106 [2024-10-17T16:45:12.405Z] ===================================================================================================================
00:30:36.106 [2024-10-17T16:45:12.405Z] Total                       :             10127.80     39.56      0.00    0.00   12598.58      0.00  37689.78
00:30:36.106 [2024-10-17 16:45:12.025187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:30:36.106 {
00:30:36.106   "results": [
00:30:36.106     {
00:30:36.106       "job": "ftl0",
00:30:36.106       "core_mask": "0x1",
00:30:36.106       "workload": "randwrite",
00:30:36.106       "status": "finished",
00:30:36.106       "queue_depth": 128,
00:30:36.106       "io_size": 4096,
00:30:36.106       "runtime": 4.024467,
00:30:36.106       "iops": 10127.800774611893,
00:30:36.106       "mibps": 39.561721775827706,
00:30:36.106       "io_failed": 0,
00:30:36.106       "io_timeout": 0,
00:30:36.106       "avg_latency_us": 12598.584732137411,
00:30:36.106       "min_latency_us": 230.29718875502007,
00:30:36.106       "max_latency_us": 37689.77991967872
00:30:36.106     }
00:30:36.106   ],
00:30:36.106   "core_count": 1
00:30:36.106 }
00:30:36.106 16:45:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:30:36.106 [2024-10-17 16:45:12.140444] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:30:36.106 Running I/O for 4 seconds...
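At queue depth 128 the averages should obey Little's law, queue depth ≈ IOPS x mean latency, and the reported figures fit:

  10127.80 IOPS x 12598.58 us = 10127.80/s x 0.01259858 s ≈ 127.6 ≈ 128

The verify pass just launched uses bdevperf's verify workload, which reads back and checks the data it wrote; note that its LBA range length, 0x1400000 = 20971520 blocks, is the same count the startup dump reported as "L2P entries", so the verification walks ftl0's entire logical space.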
00:30:37.976 8244.00 IOPS, 32.20 MiB/s
[2024-10-17T16:45:15.213Z] 7915.50 IOPS, 30.92 MiB/s
[2024-10-17T16:45:16.203Z] 7822.33 IOPS, 30.56 MiB/s
[2024-10-17T16:45:16.203Z] 7845.50 IOPS, 30.65 MiB/s
00:30:39.904 Latency(us)
00:30:39.904 [2024-10-17T16:45:16.203Z] Device Information          : runtime(s)      IOPS     MiB/s    Fail/s    TO/s    Average       min       max
00:30:39.904 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:39.904 Verification LBA range: start 0x0 length 0x1400000
00:30:39.904 ftl0                        :       4.01   7854.08     30.68      0.00    0.00   16245.52    266.49  33478.63
00:30:39.904 [2024-10-17T16:45:16.203Z] ===================================================================================================================
00:30:39.904 [2024-10-17T16:45:16.203Z] Total                       :              7854.08     30.68      0.00    0.00   16245.52      0.00  33478.63
00:30:39.904 [2024-10-17 16:45:16.165532] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:30:39.904 {
00:30:39.904   "results": [
00:30:39.904     {
00:30:39.904       "job": "ftl0",
00:30:39.904       "core_mask": "0x1",
00:30:39.904       "workload": "verify",
00:30:39.904       "status": "finished",
00:30:39.904       "verify_range": {
00:30:39.904         "start": 0,
00:30:39.904         "length": 20971520
00:30:39.904       },
00:30:39.904       "queue_depth": 128,
00:30:39.904       "io_size": 4096,
00:30:39.904       "runtime": 4.011801,
00:30:39.904       "iops": 7854.078504890946,
00:30:39.904       "mibps": 30.679994159730256,
00:30:39.904       "io_failed": 0,
00:30:39.904       "io_timeout": 0,
00:30:39.904       "avg_latency_us": 16245.521315679423,
00:30:39.904       "min_latency_us": 266.4867469879518,
00:30:39.904       "max_latency_us": 33478.631325301205
00:30:39.904     }
00:30:39.904   ],
00:30:39.904   "core_count": 1
00:30:39.904 }
00:30:40.163 16:45:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:30:40.163 [2024-10-17 16:45:16.400736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:40.163 [2024-10-17 16:45:16.400809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:30:40.163 [2024-10-17 16:45:16.400829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:30:40.163 [2024-10-17 16:45:16.400844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:40.164 [2024-10-17 16:45:16.400875] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:30:40.164 [2024-10-17 16:45:16.405306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:40.164 [2024-10-17 16:45:16.405345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:30:40.164 [2024-10-17 16:45:16.405363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.412 ms
00:30:40.164 [2024-10-17 16:45:16.405374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:40.164 [2024-10-17 16:45:16.407129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:40.164 [2024-10-17 16:45:16.407171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:30:40.164 [2024-10-17 16:45:16.407189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.720 ms
00:30:40.164 [2024-10-17 16:45:16.407200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:30:40.423 [2024-10-17 16:45:16.618114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:30:40.423 [2024-10-17 16:45:16.618189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:30:40.423 [2024-10-17 16:45:16.618219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 211.217 ms 00:30:40.423 [2024-10-17 16:45:16.618230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.423 [2024-10-17 16:45:16.623479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.423 [2024-10-17 16:45:16.623668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:40.423 [2024-10-17 16:45:16.623723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.205 ms 00:30:40.423 [2024-10-17 16:45:16.623737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.423 [2024-10-17 16:45:16.663028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.423 [2024-10-17 16:45:16.663098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:40.423 [2024-10-17 16:45:16.663119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.238 ms 00:30:40.423 [2024-10-17 16:45:16.663130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.423 [2024-10-17 16:45:16.685679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.423 [2024-10-17 16:45:16.685950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:40.423 [2024-10-17 16:45:16.685987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.515 ms 00:30:40.423 [2024-10-17 16:45:16.686002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.423 [2024-10-17 16:45:16.686185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.423 [2024-10-17 16:45:16.686200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:40.423 [2024-10-17 16:45:16.686219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:30:40.423 [2024-10-17 16:45:16.686229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.689 [2024-10-17 16:45:16.724678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.689 [2024-10-17 16:45:16.724932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:40.689 [2024-10-17 16:45:16.724965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.479 ms 00:30:40.689 [2024-10-17 16:45:16.724976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.689 [2024-10-17 16:45:16.764029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.689 [2024-10-17 16:45:16.764261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:40.689 [2024-10-17 16:45:16.764292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.016 ms 00:30:40.689 [2024-10-17 16:45:16.764303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.689 [2024-10-17 16:45:16.800865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.689 [2024-10-17 16:45:16.800913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:40.689 [2024-10-17 16:45:16.800931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.532 ms 00:30:40.689 [2024-10-17 16:45:16.800941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.690 [2024-10-17 16:45:16.837428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.690 [2024-10-17 
16:45:16.837486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:40.690 [2024-10-17 16:45:16.837510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.436 ms 00:30:40.690 [2024-10-17 16:45:16.837521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.690 [2024-10-17 16:45:16.837571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:40.690 [2024-10-17 16:45:16.837590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:40.690 [2024-10-17 16:45:16.837719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:30:40.691 [2024-10-17 16:45:16.837882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 23-96: 0 / 261120 wr_cnt: 0 state: free (74 identical per-band entries condensed) 00:30:40.695 [2024-10-17 16:45:16.838817] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:40.695 [2024-10-17 16:45:16.838830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:40.695 [2024-10-17 16:45:16.838841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:40.695 [2024-10-17 16:45:16.838854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:40.695 [2024-10-17 16:45:16.838873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:40.695 [2024-10-17 16:45:16.838886] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9ce9474e-eb0e-471a-94fb-bd7c089128c0 00:30:40.695 [2024-10-17 16:45:16.838897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:40.695 [2024-10-17 16:45:16.838909] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:40.695 [2024-10-17 16:45:16.838919] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:40.695 [2024-10-17 16:45:16.838932] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:40.695 [2024-10-17 16:45:16.838945] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:40.695 [2024-10-17 16:45:16.838957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:40.695 [2024-10-17 16:45:16.838968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:40.695 [2024-10-17 16:45:16.838982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:40.695 [2024-10-17 16:45:16.838991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:40.695 [2024-10-17 16:45:16.839004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.695 [2024-10-17 16:45:16.839015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:40.695 [2024-10-17 16:45:16.839029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:30:40.695 [2024-10-17 16:45:16.839039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.695 [2024-10-17 16:45:16.859021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.695 [2024-10-17 16:45:16.859180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:40.695 [2024-10-17 16:45:16.859212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.927 ms 00:30:40.695 [2024-10-17 16:45:16.859222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.695 [2024-10-17 16:45:16.859806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:40.696 [2024-10-17 16:45:16.859819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:40.696 [2024-10-17 16:45:16.859833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:30:40.696 [2024-10-17 16:45:16.859843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.696 [2024-10-17 16:45:16.915642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.696 [2024-10-17 16:45:16.915710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:40.696 [2024-10-17 16:45:16.915731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.696 [2024-10-17 16:45:16.915741] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:30:40.696 [2024-10-17 16:45:16.915812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.696 [2024-10-17 16:45:16.915824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:40.696 [2024-10-17 16:45:16.915837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.696 [2024-10-17 16:45:16.915847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.696 [2024-10-17 16:45:16.915942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.696 [2024-10-17 16:45:16.915956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:40.696 [2024-10-17 16:45:16.915969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.696 [2024-10-17 16:45:16.915983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.696 [2024-10-17 16:45:16.916003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.696 [2024-10-17 16:45:16.916013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:40.696 [2024-10-17 16:45:16.916026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.696 [2024-10-17 16:45:16.916036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.043651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.043713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:40.957 [2024-10-17 16:45:17.043739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.957 [2024-10-17 16:45:17.043750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.145254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.145317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:40.957 [2024-10-17 16:45:17.145335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.957 [2024-10-17 16:45:17.145346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.145467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.145480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:40.957 [2024-10-17 16:45:17.145494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.957 [2024-10-17 16:45:17.145504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.145567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.145579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:40.957 [2024-10-17 16:45:17.145592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.957 [2024-10-17 16:45:17.145602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.145740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.145755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:40.957 [2024-10-17 16:45:17.145771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:30:40.957 [2024-10-17 16:45:17.145781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.145846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.145858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:40.957 [2024-10-17 16:45:17.145871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.957 [2024-10-17 16:45:17.145881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.145921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.145932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:40.957 [2024-10-17 16:45:17.145946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.957 [2024-10-17 16:45:17.145956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.146005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.957 [2024-10-17 16:45:17.146027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:40.957 [2024-10-17 16:45:17.146041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.957 [2024-10-17 16:45:17.146050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.957 [2024-10-17 16:45:17.146180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 746.628 ms, result 0 00:30:40.957 true 00:30:40.957 16:45:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75017 00:30:40.957 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 75017 ']' 00:30:40.957 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 75017 00:30:40.957 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:30:40.957 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:40.957 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75017 00:30:40.957 killing process with pid 75017 00:30:40.957 Received shutdown signal, test time was about 4.000000 seconds 00:30:40.957 00:30:40.957 Latency(us) 00:30:40.957 [2024-10-17T16:45:17.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.957 [2024-10-17T16:45:17.256Z] =================================================================================================================== 00:30:40.958 [2024-10-17T16:45:17.257Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:40.958 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:40.958 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:40.958 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75017' 00:30:40.958 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 75017 00:30:40.958 16:45:17 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 75017 00:30:45.160 Remove shared memory files 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:45.160 16:45:20 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:30:45.160 ************************************ 00:30:45.160 END TEST ftl_bdevperf 00:30:45.160 ************************************ 00:30:45.160 00:30:45.160 real 0m25.621s 00:30:45.160 user 0m28.519s 00:30:45.160 sys 0m1.298s 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.160 16:45:20 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:45.160 16:45:20 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:30:45.160 16:45:20 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:45.160 16:45:20 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.160 16:45:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:45.160 ************************************ 00:30:45.160 START TEST ftl_trim 00:30:45.160 ************************************ 00:30:45.160 16:45:20 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:30:45.160 * Looking for test storage... 00:30:45.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.160 16:45:21 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.160 --rc genhtml_branch_coverage=1 00:30:45.160 --rc genhtml_function_coverage=1 00:30:45.160 --rc genhtml_legend=1 00:30:45.160 --rc geninfo_all_blocks=1 00:30:45.160 --rc geninfo_unexecuted_blocks=1 00:30:45.160 00:30:45.160 ' 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.160 --rc genhtml_branch_coverage=1 00:30:45.160 --rc genhtml_function_coverage=1 00:30:45.160 --rc genhtml_legend=1 00:30:45.160 --rc geninfo_all_blocks=1 00:30:45.160 --rc geninfo_unexecuted_blocks=1 00:30:45.160 00:30:45.160 ' 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.160 --rc genhtml_branch_coverage=1 00:30:45.160 --rc genhtml_function_coverage=1 00:30:45.160 --rc genhtml_legend=1 00:30:45.160 --rc geninfo_all_blocks=1 00:30:45.160 --rc geninfo_unexecuted_blocks=1 00:30:45.160 00:30:45.160 ' 00:30:45.160 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:45.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.160 --rc genhtml_branch_coverage=1 00:30:45.160 --rc genhtml_function_coverage=1 00:30:45.160 --rc genhtml_legend=1 00:30:45.160 --rc geninfo_all_blocks=1 00:30:45.160 --rc geninfo_unexecuted_blocks=1 00:30:45.160 00:30:45.160 ' 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
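The cmp_versions xtrace above is the harness deciding that the installed lcov (1.15) predates 2.x before choosing coverage flags. A minimal standalone sketch of that comparison logic, assuming only bash; names mirror the trace, but the body is re-derived here rather than copied from scripts/common.sh:

# Split versions on . - : and compare numerically, field by field.
cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # First differing field decides; missing fields count as 0.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == *'>'* ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == *'<'* ]]; return; }
    done
    [[ $2 == *'='* ]]                 # equal on every field: true only for == <= >=
}
lt() { cmp_versions "$1" '<' "$2"; }
lt 1.15 2 && echo "lcov 1.15 < 2"     # succeeds, matching the 'return 0' traced above

Because lt 1.15 2 succeeds, the run keeps the legacy --rc lcov_branch_coverage=1 / --rc lcov_function_coverage=1 options exported into LCOV_OPTS above.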
00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:45.160 16:45:21 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:45.161 16:45:21 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75375 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75375 00:30:45.161 16:45:21 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:30:45.161 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75375 ']' 00:30:45.161 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.161 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:45.161 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.161 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:45.161 16:45:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:30:45.161 [2024-10-17 16:45:21.349149] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:30:45.161 [2024-10-17 16:45:21.349512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75375 ] 00:30:45.420 [2024-10-17 16:45:21.527553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:45.420 [2024-10-17 16:45:21.656487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.420 [2024-10-17 16:45:21.656583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.420 [2024-10-17 16:45:21.656617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.359 16:45:22 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:46.359 16:45:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:30:46.359 16:45:22 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:46.359 16:45:22 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:30:46.359 16:45:22 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:46.359 16:45:22 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:30:46.359 16:45:22 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:30:46.359 16:45:22 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:46.926 16:45:22 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:46.926 16:45:22 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:30:46.926 16:45:22 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:46.926 16:45:22 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:30:46.926 16:45:22 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:46.926 16:45:22 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:30:46.926 16:45:22 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:30:46.926 16:45:22 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:46.926 16:45:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:46.926 { 00:30:46.926 "name": "nvme0n1", 00:30:46.926 "aliases": [ 
00:30:46.926 "2ba50c89-0807-432f-8720-b914d9107357" 00:30:46.926 ], 00:30:46.926 "product_name": "NVMe disk", 00:30:46.926 "block_size": 4096, 00:30:46.926 "num_blocks": 1310720, 00:30:46.926 "uuid": "2ba50c89-0807-432f-8720-b914d9107357", 00:30:46.926 "numa_id": -1, 00:30:46.926 "assigned_rate_limits": { 00:30:46.926 "rw_ios_per_sec": 0, 00:30:46.926 "rw_mbytes_per_sec": 0, 00:30:46.926 "r_mbytes_per_sec": 0, 00:30:46.926 "w_mbytes_per_sec": 0 00:30:46.926 }, 00:30:46.927 "claimed": true, 00:30:46.927 "claim_type": "read_many_write_one", 00:30:46.927 "zoned": false, 00:30:46.927 "supported_io_types": { 00:30:46.927 "read": true, 00:30:46.927 "write": true, 00:30:46.927 "unmap": true, 00:30:46.927 "flush": true, 00:30:46.927 "reset": true, 00:30:46.927 "nvme_admin": true, 00:30:46.927 "nvme_io": true, 00:30:46.927 "nvme_io_md": false, 00:30:46.927 "write_zeroes": true, 00:30:46.927 "zcopy": false, 00:30:46.927 "get_zone_info": false, 00:30:46.927 "zone_management": false, 00:30:46.927 "zone_append": false, 00:30:46.927 "compare": true, 00:30:46.927 "compare_and_write": false, 00:30:46.927 "abort": true, 00:30:46.927 "seek_hole": false, 00:30:46.927 "seek_data": false, 00:30:46.927 "copy": true, 00:30:46.927 "nvme_iov_md": false 00:30:46.927 }, 00:30:46.927 "driver_specific": { 00:30:46.927 "nvme": [ 00:30:46.927 { 00:30:46.927 "pci_address": "0000:00:11.0", 00:30:46.927 "trid": { 00:30:46.927 "trtype": "PCIe", 00:30:46.927 "traddr": "0000:00:11.0" 00:30:46.927 }, 00:30:46.927 "ctrlr_data": { 00:30:46.927 "cntlid": 0, 00:30:46.927 "vendor_id": "0x1b36", 00:30:46.927 "model_number": "QEMU NVMe Ctrl", 00:30:46.927 "serial_number": "12341", 00:30:46.927 "firmware_revision": "8.0.0", 00:30:46.927 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:46.927 "oacs": { 00:30:46.927 "security": 0, 00:30:46.927 "format": 1, 00:30:46.927 "firmware": 0, 00:30:46.927 "ns_manage": 1 00:30:46.927 }, 00:30:46.927 "multi_ctrlr": false, 00:30:46.927 "ana_reporting": false 00:30:46.927 }, 00:30:46.927 "vs": { 00:30:46.927 "nvme_version": "1.4" 00:30:46.927 }, 00:30:46.927 "ns_data": { 00:30:46.927 "id": 1, 00:30:46.927 "can_share": false 00:30:46.927 } 00:30:46.927 } 00:30:46.927 ], 00:30:46.927 "mp_policy": "active_passive" 00:30:46.927 } 00:30:46.927 } 00:30:46.927 ]' 00:30:46.927 16:45:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:46.927 16:45:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:30:46.927 16:45:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:47.185 16:45:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:30:47.185 16:45:23 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:30:47.185 16:45:23 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=3791e161-41c2-48fc-963e-f7d0473f98e7 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:30:47.185 16:45:23 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 3791e161-41c2-48fc-963e-f7d0473f98e7 00:30:47.446 16:45:23 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:47.720 16:45:23 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=49615d30-6c7c-4e2f-9373-7fb7ee87cb60 00:30:47.720 16:45:23 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 49615d30-6c7c-4e2f-9373-7fb7ee87cb60 00:30:47.994 16:45:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:47.994 16:45:24 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:47.994 16:45:24 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:30:47.994 16:45:24 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:47.994 16:45:24 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:47.994 16:45:24 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:30:47.994 16:45:24 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:47.994 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:47.994 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:47.994 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:30:47.994 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:30:47.994 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:48.253 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:48.253 { 00:30:48.253 "name": "67aaa761-5702-40f8-af98-99eee5eaff8c", 00:30:48.253 "aliases": [ 00:30:48.253 "lvs/nvme0n1p0" 00:30:48.253 ], 00:30:48.253 "product_name": "Logical Volume", 00:30:48.253 "block_size": 4096, 00:30:48.253 "num_blocks": 26476544, 00:30:48.253 "uuid": "67aaa761-5702-40f8-af98-99eee5eaff8c", 00:30:48.253 "assigned_rate_limits": { 00:30:48.253 "rw_ios_per_sec": 0, 00:30:48.253 "rw_mbytes_per_sec": 0, 00:30:48.253 "r_mbytes_per_sec": 0, 00:30:48.253 "w_mbytes_per_sec": 0 00:30:48.253 }, 00:30:48.253 "claimed": false, 00:30:48.253 "zoned": false, 00:30:48.253 "supported_io_types": { 00:30:48.253 "read": true, 00:30:48.253 "write": true, 00:30:48.253 "unmap": true, 00:30:48.253 "flush": false, 00:30:48.253 "reset": true, 00:30:48.253 "nvme_admin": false, 00:30:48.253 "nvme_io": false, 00:30:48.253 "nvme_io_md": false, 00:30:48.253 "write_zeroes": true, 00:30:48.253 "zcopy": false, 00:30:48.253 "get_zone_info": false, 00:30:48.253 "zone_management": false, 00:30:48.253 "zone_append": false, 00:30:48.253 "compare": false, 00:30:48.253 "compare_and_write": false, 00:30:48.253 "abort": false, 00:30:48.253 "seek_hole": true, 00:30:48.253 "seek_data": true, 00:30:48.253 "copy": false, 00:30:48.253 "nvme_iov_md": false 00:30:48.253 }, 00:30:48.253 "driver_specific": { 00:30:48.253 "lvol": { 00:30:48.253 "lvol_store_uuid": "49615d30-6c7c-4e2f-9373-7fb7ee87cb60", 00:30:48.253 "base_bdev": "nvme0n1", 00:30:48.253 "thin_provision": true, 00:30:48.253 "num_allocated_clusters": 0, 00:30:48.253 "snapshot": false, 00:30:48.253 "clone": false, 00:30:48.253 "esnap_clone": false 00:30:48.253 } 00:30:48.253 } 00:30:48.253 } 00:30:48.253 ]' 00:30:48.253 16:45:24 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:48.253 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:30:48.253 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:48.253 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:48.253 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:48.253 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:30:48.253 16:45:24 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:30:48.253 16:45:24 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:30:48.253 16:45:24 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:48.512 16:45:24 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:48.512 16:45:24 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:48.512 16:45:24 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:48.512 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:48.512 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:48.512 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:30:48.512 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:30:48.512 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:48.771 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:48.771 { 00:30:48.771 "name": "67aaa761-5702-40f8-af98-99eee5eaff8c", 00:30:48.771 "aliases": [ 00:30:48.771 "lvs/nvme0n1p0" 00:30:48.771 ], 00:30:48.771 "product_name": "Logical Volume", 00:30:48.771 "block_size": 4096, 00:30:48.771 "num_blocks": 26476544, 00:30:48.771 "uuid": "67aaa761-5702-40f8-af98-99eee5eaff8c", 00:30:48.771 "assigned_rate_limits": { 00:30:48.771 "rw_ios_per_sec": 0, 00:30:48.771 "rw_mbytes_per_sec": 0, 00:30:48.771 "r_mbytes_per_sec": 0, 00:30:48.771 "w_mbytes_per_sec": 0 00:30:48.771 }, 00:30:48.771 "claimed": false, 00:30:48.771 "zoned": false, 00:30:48.771 "supported_io_types": { 00:30:48.771 "read": true, 00:30:48.771 "write": true, 00:30:48.771 "unmap": true, 00:30:48.771 "flush": false, 00:30:48.771 "reset": true, 00:30:48.771 "nvme_admin": false, 00:30:48.771 "nvme_io": false, 00:30:48.771 "nvme_io_md": false, 00:30:48.771 "write_zeroes": true, 00:30:48.771 "zcopy": false, 00:30:48.771 "get_zone_info": false, 00:30:48.771 "zone_management": false, 00:30:48.771 "zone_append": false, 00:30:48.771 "compare": false, 00:30:48.771 "compare_and_write": false, 00:30:48.771 "abort": false, 00:30:48.771 "seek_hole": true, 00:30:48.771 "seek_data": true, 00:30:48.771 "copy": false, 00:30:48.771 "nvme_iov_md": false 00:30:48.771 }, 00:30:48.771 "driver_specific": { 00:30:48.771 "lvol": { 00:30:48.771 "lvol_store_uuid": "49615d30-6c7c-4e2f-9373-7fb7ee87cb60", 00:30:48.771 "base_bdev": "nvme0n1", 00:30:48.771 "thin_provision": true, 00:30:48.771 "num_allocated_clusters": 0, 00:30:48.771 "snapshot": false, 00:30:48.771 "clone": false, 00:30:48.771 "esnap_clone": false 00:30:48.771 } 00:30:48.771 } 00:30:48.771 } 00:30:48.771 ]' 00:30:48.771 16:45:24 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:48.771 16:45:25 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:30:48.771 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:30:49.031 16:45:25 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:30:49.031 16:45:25 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:49.031 16:45:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:30:49.031 16:45:25 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:30:49.031 16:45:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:30:49.031 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 67aaa761-5702-40f8-af98-99eee5eaff8c 00:30:49.291 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:30:49.291 { 00:30:49.291 "name": "67aaa761-5702-40f8-af98-99eee5eaff8c", 00:30:49.291 "aliases": [ 00:30:49.291 "lvs/nvme0n1p0" 00:30:49.291 ], 00:30:49.291 "product_name": "Logical Volume", 00:30:49.291 "block_size": 4096, 00:30:49.291 "num_blocks": 26476544, 00:30:49.291 "uuid": "67aaa761-5702-40f8-af98-99eee5eaff8c", 00:30:49.291 "assigned_rate_limits": { 00:30:49.291 "rw_ios_per_sec": 0, 00:30:49.291 "rw_mbytes_per_sec": 0, 00:30:49.291 "r_mbytes_per_sec": 0, 00:30:49.291 "w_mbytes_per_sec": 0 00:30:49.291 }, 00:30:49.291 "claimed": false, 00:30:49.291 "zoned": false, 00:30:49.291 "supported_io_types": { 00:30:49.291 "read": true, 00:30:49.291 "write": true, 00:30:49.291 "unmap": true, 00:30:49.291 "flush": false, 00:30:49.291 "reset": true, 00:30:49.291 "nvme_admin": false, 00:30:49.291 "nvme_io": false, 00:30:49.291 "nvme_io_md": false, 00:30:49.291 "write_zeroes": true, 00:30:49.291 "zcopy": false, 00:30:49.291 "get_zone_info": false, 00:30:49.291 "zone_management": false, 00:30:49.291 "zone_append": false, 00:30:49.291 "compare": false, 00:30:49.291 "compare_and_write": false, 00:30:49.291 "abort": false, 00:30:49.291 "seek_hole": true, 00:30:49.291 "seek_data": true, 00:30:49.291 "copy": false, 00:30:49.291 "nvme_iov_md": false 00:30:49.291 }, 00:30:49.291 "driver_specific": { 00:30:49.291 "lvol": { 00:30:49.291 "lvol_store_uuid": "49615d30-6c7c-4e2f-9373-7fb7ee87cb60", 00:30:49.291 "base_bdev": "nvme0n1", 00:30:49.291 "thin_provision": true, 00:30:49.291 "num_allocated_clusters": 0, 00:30:49.291 "snapshot": false, 00:30:49.291 "clone": false, 00:30:49.291 "esnap_clone": false 00:30:49.291 } 00:30:49.291 } 00:30:49.291 } 00:30:49.291 ]' 00:30:49.291 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:30:49.291 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:30:49.291 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:30:49.552 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:30:49.552 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:30:49.552 16:45:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:30:49.552 16:45:25 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:30:49.552 16:45:25 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 67aaa761-5702-40f8-af98-99eee5eaff8c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:30:49.552 [2024-10-17 16:45:25.801348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.801404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:49.552 [2024-10-17 16:45:25.801425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:49.552 [2024-10-17 16:45:25.801453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.805051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.805093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:49.552 [2024-10-17 16:45:25.805113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.568 ms 00:30:49.552 [2024-10-17 16:45:25.805124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.805289] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:49.552 [2024-10-17 16:45:25.806374] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:49.552 [2024-10-17 16:45:25.806415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.806427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:49.552 [2024-10-17 16:45:25.806441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.136 ms 00:30:49.552 [2024-10-17 16:45:25.806451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.806649] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:30:49.552 [2024-10-17 16:45:25.808139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.808288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:49.552 [2024-10-17 16:45:25.808312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:49.552 [2024-10-17 16:45:25.808328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.816087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.816124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:49.552 [2024-10-17 16:45:25.816137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.634 ms 00:30:49.552 [2024-10-17 16:45:25.816150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.816307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.816327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:49.552 [2024-10-17 16:45:25.816339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.082 ms 00:30:49.552 [2024-10-17 16:45:25.816356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.816410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.816449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:49.552 [2024-10-17 16:45:25.816469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:49.552 [2024-10-17 16:45:25.816490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.816547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:30:49.552 [2024-10-17 16:45:25.821895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.821928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:49.552 [2024-10-17 16:45:25.821943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.363 ms 00:30:49.552 [2024-10-17 16:45:25.821954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.822033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.822046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:49.552 [2024-10-17 16:45:25.822059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:49.552 [2024-10-17 16:45:25.822086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.822122] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:49.552 [2024-10-17 16:45:25.822250] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:49.552 [2024-10-17 16:45:25.822270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:49.552 [2024-10-17 16:45:25.822284] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:49.552 [2024-10-17 16:45:25.822300] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:49.552 [2024-10-17 16:45:25.822312] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:49.552 [2024-10-17 16:45:25.822326] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:30:49.552 [2024-10-17 16:45:25.822337] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:49.552 [2024-10-17 16:45:25.822349] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:49.552 [2024-10-17 16:45:25.822358] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:49.552 [2024-10-17 16:45:25.822372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 [2024-10-17 16:45:25.822382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:49.552 [2024-10-17 16:45:25.822399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:30:49.552 [2024-10-17 16:45:25.822409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.822501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.552 
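With the layout initialized, the device stack behind ftl0 is easier to see in one place. The following condenses the rpc.py xtrace above into a plain sequence; the UUIDs are the ones captured in this run and would differ on a fresh target, and each call assumes the spdk_tgt started above is listening on /var/tmp/spdk.sock. It ends with a sanity check of the L2P figures just logged:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base QEMU NVMe: 1310720 x 4096 B = 5120 MiB
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # after clear_lvols dropped the stale store
$rpc bdev_lvol_create nvme0n1p0 103424 -t \
    -u 49615d30-6c7c-4e2f-9373-7fb7ee87cb60                         # thin (-t), so 103424 MiB on a 5120 MiB disk is allowed
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache-side controller
$rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0
$rpc -t 240 bdev_ftl_create -b ftl0 -d 67aaa761-5702-40f8-af98-99eee5eaff8c \
    -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
# L2P bookkeeping reported above: 23592960 entries x 4 B per entry
echo $(( 23592960 * 4 / 1024 / 1024 ))                              # -> 90, the 90.00 MiB l2p region in the dump below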
[2024-10-17 16:45:25.822512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:49.552 [2024-10-17 16:45:25.822524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:49.552 [2024-10-17 16:45:25.822534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.552 [2024-10-17 16:45:25.822653] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:49.552 [2024-10-17 16:45:25.822665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:49.552 [2024-10-17 16:45:25.822678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:49.552 [2024-10-17 16:45:25.822691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.552 [2024-10-17 16:45:25.822720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:49.552 [2024-10-17 16:45:25.822730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:49.552 [2024-10-17 16:45:25.822742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:30:49.552 [2024-10-17 16:45:25.822751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:49.552 [2024-10-17 16:45:25.822763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:30:49.552 [2024-10-17 16:45:25.822772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.552 [2024-10-17 16:45:25.822784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:49.552 [2024-10-17 16:45:25.822794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:30:49.552 [2024-10-17 16:45:25.822805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:49.552 [2024-10-17 16:45:25.822815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:49.552 [2024-10-17 16:45:25.822826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:30:49.552 [2024-10-17 16:45:25.822853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.552 [2024-10-17 16:45:25.822867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:49.552 [2024-10-17 16:45:25.822877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:30:49.552 [2024-10-17 16:45:25.822889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.552 [2024-10-17 16:45:25.822898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:49.552 [2024-10-17 16:45:25.822911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:30:49.552 [2024-10-17 16:45:25.822920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.552 [2024-10-17 16:45:25.822932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:49.553 [2024-10-17 16:45:25.822941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:30:49.553 [2024-10-17 16:45:25.822953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.553 [2024-10-17 16:45:25.822962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:49.553 [2024-10-17 16:45:25.822974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:30:49.553 [2024-10-17 16:45:25.822984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.553 [2024-10-17 16:45:25.822996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:30:49.553 [2024-10-17 16:45:25.823005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:30:49.553 [2024-10-17 16:45:25.823019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:49.553 [2024-10-17 16:45:25.823028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:49.553 [2024-10-17 16:45:25.823042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:30:49.553 [2024-10-17 16:45:25.823052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.553 [2024-10-17 16:45:25.823063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:49.553 [2024-10-17 16:45:25.823072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:30:49.553 [2024-10-17 16:45:25.823083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:49.553 [2024-10-17 16:45:25.823092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:49.553 [2024-10-17 16:45:25.823104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:30:49.553 [2024-10-17 16:45:25.823113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.553 [2024-10-17 16:45:25.823124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:49.553 [2024-10-17 16:45:25.823133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:30:49.553 [2024-10-17 16:45:25.823144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.553 [2024-10-17 16:45:25.823153] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:49.553 [2024-10-17 16:45:25.823165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:49.553 [2024-10-17 16:45:25.823175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:49.553 [2024-10-17 16:45:25.823186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:49.553 [2024-10-17 16:45:25.823197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:49.553 [2024-10-17 16:45:25.823212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:49.553 [2024-10-17 16:45:25.823221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:49.553 [2024-10-17 16:45:25.823233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:49.553 [2024-10-17 16:45:25.823241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:49.553 [2024-10-17 16:45:25.823253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:49.553 [2024-10-17 16:45:25.823266] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:49.553 [2024-10-17 16:45:25.823280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.553 [2024-10-17 16:45:25.823291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:30:49.553 [2024-10-17 16:45:25.823305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:30:49.553 [2024-10-17 16:45:25.823316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:30:49.553 [2024-10-17 16:45:25.823328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:30:49.553 [2024-10-17 16:45:25.823338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:30:49.553 [2024-10-17 16:45:25.823351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:30:49.553 [2024-10-17 16:45:25.823361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:30:49.553 [2024-10-17 16:45:25.823374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:30:49.553 [2024-10-17 16:45:25.823385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:30:49.553 [2024-10-17 16:45:25.823399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:30:49.553 [2024-10-17 16:45:25.823410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:30:49.553 [2024-10-17 16:45:25.823422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:30:49.553 [2024-10-17 16:45:25.823432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:30:49.553 [2024-10-17 16:45:25.823444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:30:49.553 [2024-10-17 16:45:25.823454] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:49.553 [2024-10-17 16:45:25.823469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:49.553 [2024-10-17 16:45:25.823480] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:49.553 [2024-10-17 16:45:25.823495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:49.553 [2024-10-17 16:45:25.823505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:49.553 [2024-10-17 16:45:25.823518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:49.553 [2024-10-17 16:45:25.823528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:49.553 [2024-10-17 16:45:25.823548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:49.553 [2024-10-17 16:45:25.823558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:30:49.553 [2024-10-17 16:45:25.823570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:49.553 [2024-10-17 16:45:25.823656] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:30:49.553 [2024-10-17 16:45:25.823674] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:52.843 [2024-10-17 16:45:28.885195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:28.885451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:52.843 [2024-10-17 16:45:28.885483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3066.508 ms 00:30:52.843 [2024-10-17 16:45:28.885497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:28.923952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:28.924169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:52.843 [2024-10-17 16:45:28.924195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.115 ms 00:30:52.843 [2024-10-17 16:45:28.924209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:28.924368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:28.924392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:52.843 [2024-10-17 16:45:28.924405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:52.843 [2024-10-17 16:45:28.924424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:28.982530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:28.982589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:52.843 [2024-10-17 16:45:28.982609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.144 ms 00:30:52.843 [2024-10-17 16:45:28.982633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:28.982777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:28.982799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:52.843 [2024-10-17 16:45:28.982814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:52.843 [2024-10-17 16:45:28.982830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:28.983313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:28.983334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:52.843 [2024-10-17 16:45:28.983348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:30:52.843 [2024-10-17 16:45:28.983365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:28.983513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:28.983531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:52.843 [2024-10-17 16:45:28.983545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:30:52.843 [2024-10-17 16:45:28.983564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:29.005591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:29.005640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:30:52.843 [2024-10-17 16:45:29.005656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.001 ms 00:30:52.843 [2024-10-17 16:45:29.005669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:29.018550] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:30:52.843 [2024-10-17 16:45:29.035245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:29.035303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:52.843 [2024-10-17 16:45:29.035322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.445 ms 00:30:52.843 [2024-10-17 16:45:29.035333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:29.130658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:29.130721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:52.843 [2024-10-17 16:45:29.130742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.349 ms 00:30:52.843 [2024-10-17 16:45:29.130758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:52.843 [2024-10-17 16:45:29.131013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:52.843 [2024-10-17 16:45:29.131028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:52.843 [2024-10-17 16:45:29.131045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:30:52.843 [2024-10-17 16:45:29.131056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.103 [2024-10-17 16:45:29.169279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.103 [2024-10-17 16:45:29.169330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:53.103 [2024-10-17 16:45:29.169354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.243 ms 00:30:53.103 [2024-10-17 16:45:29.169364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.103 [2024-10-17 16:45:29.207007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.103 [2024-10-17 16:45:29.207059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:53.103 [2024-10-17 16:45:29.207079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.606 ms 00:30:53.103 [2024-10-17 16:45:29.207089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.103 [2024-10-17 16:45:29.207943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.103 [2024-10-17 16:45:29.207974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:53.103 [2024-10-17 16:45:29.207989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:30:53.103 [2024-10-17 16:45:29.208000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.103 [2024-10-17 16:45:29.308223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.103 [2024-10-17 16:45:29.308473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:53.103 [2024-10-17 16:45:29.308508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.333 ms 00:30:53.103 [2024-10-17 16:45:29.308520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
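Each FTL management step above is bracketed by a trace_step quadruplet (Action / name / duration / status), so per-step timing can be recovered mechanically from a saved console log. A minimal bash sketch, assuming the output has been captured to a file (the name ftl_trim.log is hypothetical, not part of this run):

    # Pair each "name: <step>" trace line with the "duration: <ms>" line
    # that follows it, printing one "<duration> <step>" row per step.
    awk '/trace_step/ && / name: /     { n = $0; sub(/.* name: /, "", n) }
         /trace_step/ && / duration: / { d = $0; sub(/.* duration: /, "", d);
                                         sub(/ ms.*/, "", d);
                                         printf "%10.3f ms  %s\n", d, n }' ftl_trim.log

Piped through sort -rn, this immediately surfaces the dominant steps; in the startup sequence above, Scrub NV cache (3066.508 ms) accounts for nearly all of the 3631.720 ms total.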
00:30:53.103 [2024-10-17 16:45:29.348710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.103 [2024-10-17 16:45:29.348771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:53.103 [2024-10-17 16:45:29.348791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.070 ms 00:30:53.103 [2024-10-17 16:45:29.348803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.103 [2024-10-17 16:45:29.387531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.103 [2024-10-17 16:45:29.387775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:53.103 [2024-10-17 16:45:29.387804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.679 ms 00:30:53.103 [2024-10-17 16:45:29.387815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.362 [2024-10-17 16:45:29.426032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.362 [2024-10-17 16:45:29.426093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:53.362 [2024-10-17 16:45:29.426112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.169 ms 00:30:53.362 [2024-10-17 16:45:29.426140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.362 [2024-10-17 16:45:29.426259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.362 [2024-10-17 16:45:29.426272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:53.362 [2024-10-17 16:45:29.426290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:53.362 [2024-10-17 16:45:29.426300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.362 [2024-10-17 16:45:29.426394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:53.362 [2024-10-17 16:45:29.426406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:53.362 [2024-10-17 16:45:29.426418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:53.362 [2024-10-17 16:45:29.426429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:53.362 [2024-10-17 16:45:29.427495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:53.362 [2024-10-17 16:45:29.432256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3631.720 ms, result 0 00:30:53.362 [2024-10-17 16:45:29.433147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:53.362 { 00:30:53.362 "name": "ftl0", 00:30:53.362 "uuid": "86c7f8b6-9b0b-494b-b85d-53cff9f6f843" 00:30:53.362 } 00:30:53.362 16:45:29 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:30:53.362 16:45:29 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:30:53.362 16:45:29 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:53.362 16:45:29 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:30:53.362 16:45:29 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:53.362 16:45:29 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:53.362 16:45:29 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:53.622 16:45:29 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:30:53.622 [ 00:30:53.622 { 00:30:53.622 "name": "ftl0", 00:30:53.622 "aliases": [ 00:30:53.622 "86c7f8b6-9b0b-494b-b85d-53cff9f6f843" 00:30:53.622 ], 00:30:53.622 "product_name": "FTL disk", 00:30:53.622 "block_size": 4096, 00:30:53.622 "num_blocks": 23592960, 00:30:53.622 "uuid": "86c7f8b6-9b0b-494b-b85d-53cff9f6f843", 00:30:53.622 "assigned_rate_limits": { 00:30:53.622 "rw_ios_per_sec": 0, 00:30:53.622 "rw_mbytes_per_sec": 0, 00:30:53.622 "r_mbytes_per_sec": 0, 00:30:53.622 "w_mbytes_per_sec": 0 00:30:53.622 }, 00:30:53.622 "claimed": false, 00:30:53.622 "zoned": false, 00:30:53.622 "supported_io_types": { 00:30:53.622 "read": true, 00:30:53.622 "write": true, 00:30:53.622 "unmap": true, 00:30:53.622 "flush": true, 00:30:53.622 "reset": false, 00:30:53.622 "nvme_admin": false, 00:30:53.622 "nvme_io": false, 00:30:53.622 "nvme_io_md": false, 00:30:53.622 "write_zeroes": true, 00:30:53.622 "zcopy": false, 00:30:53.622 "get_zone_info": false, 00:30:53.622 "zone_management": false, 00:30:53.622 "zone_append": false, 00:30:53.622 "compare": false, 00:30:53.622 "compare_and_write": false, 00:30:53.622 "abort": false, 00:30:53.622 "seek_hole": false, 00:30:53.622 "seek_data": false, 00:30:53.622 "copy": false, 00:30:53.622 "nvme_iov_md": false 00:30:53.622 }, 00:30:53.622 "driver_specific": { 00:30:53.622 "ftl": { 00:30:53.622 "base_bdev": "67aaa761-5702-40f8-af98-99eee5eaff8c", 00:30:53.622 "cache": "nvc0n1p0" 00:30:53.622 } 00:30:53.622 } 00:30:53.622 } 00:30:53.622 ] 00:30:53.622 16:45:29 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:30:53.622 16:45:29 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:30:53.622 16:45:29 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:53.895 16:45:30 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:30:53.895 16:45:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:30:54.154 16:45:30 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:30:54.154 { 00:30:54.154 "name": "ftl0", 00:30:54.154 "aliases": [ 00:30:54.154 "86c7f8b6-9b0b-494b-b85d-53cff9f6f843" 00:30:54.154 ], 00:30:54.154 "product_name": "FTL disk", 00:30:54.154 "block_size": 4096, 00:30:54.154 "num_blocks": 23592960, 00:30:54.154 "uuid": "86c7f8b6-9b0b-494b-b85d-53cff9f6f843", 00:30:54.154 "assigned_rate_limits": { 00:30:54.154 "rw_ios_per_sec": 0, 00:30:54.154 "rw_mbytes_per_sec": 0, 00:30:54.154 "r_mbytes_per_sec": 0, 00:30:54.154 "w_mbytes_per_sec": 0 00:30:54.154 }, 00:30:54.154 "claimed": false, 00:30:54.154 "zoned": false, 00:30:54.154 "supported_io_types": { 00:30:54.154 "read": true, 00:30:54.154 "write": true, 00:30:54.154 "unmap": true, 00:30:54.154 "flush": true, 00:30:54.154 "reset": false, 00:30:54.154 "nvme_admin": false, 00:30:54.154 "nvme_io": false, 00:30:54.154 "nvme_io_md": false, 00:30:54.154 "write_zeroes": true, 00:30:54.154 "zcopy": false, 00:30:54.154 "get_zone_info": false, 00:30:54.154 "zone_management": false, 00:30:54.154 "zone_append": false, 00:30:54.154 "compare": false, 00:30:54.154 "compare_and_write": false, 00:30:54.154 "abort": false, 00:30:54.154 "seek_hole": false, 00:30:54.154 "seek_data": false, 00:30:54.154 "copy": false, 00:30:54.154 "nvme_iov_md": false 00:30:54.154 }, 00:30:54.154 "driver_specific": { 00:30:54.154 "ftl": { 00:30:54.154 "base_bdev": "67aaa761-5702-40f8-af98-99eee5eaff8c", 
00:30:54.154 "cache": "nvc0n1p0" 00:30:54.154 } 00:30:54.154 } 00:30:54.154 } 00:30:54.154 ]' 00:30:54.154 16:45:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:30:54.154 16:45:30 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:30:54.154 16:45:30 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:54.413 [2024-10-17 16:45:30.533076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.533139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:54.413 [2024-10-17 16:45:30.533156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:54.413 [2024-10-17 16:45:30.533170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.533210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:30:54.413 [2024-10-17 16:45:30.537424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.537459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:54.413 [2024-10-17 16:45:30.537482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:30:54.413 [2024-10-17 16:45:30.537492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.538059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.538085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:54.413 [2024-10-17 16:45:30.538099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:30:54.413 [2024-10-17 16:45:30.538109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.540934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.540956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:54.413 [2024-10-17 16:45:30.540971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:30:54.413 [2024-10-17 16:45:30.540984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.546606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.546656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:54.413 [2024-10-17 16:45:30.546673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.593 ms 00:30:54.413 [2024-10-17 16:45:30.546683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.584484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.584671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:54.413 [2024-10-17 16:45:30.584716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.758 ms 00:30:54.413 [2024-10-17 16:45:30.584728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.606419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.606465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:54.413 [2024-10-17 16:45:30.606484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.595 ms 00:30:54.413 [2024-10-17 16:45:30.606495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.606743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.606763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:54.413 [2024-10-17 16:45:30.606777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:30:54.413 [2024-10-17 16:45:30.606787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.643756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.643803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:54.413 [2024-10-17 16:45:30.643821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.989 ms 00:30:54.413 [2024-10-17 16:45:30.643831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.413 [2024-10-17 16:45:30.679778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.413 [2024-10-17 16:45:30.679833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:54.413 [2024-10-17 16:45:30.679856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.903 ms 00:30:54.413 [2024-10-17 16:45:30.679866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.673 [2024-10-17 16:45:30.716063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.673 [2024-10-17 16:45:30.716119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:54.673 [2024-10-17 16:45:30.716137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.152 ms 00:30:54.673 [2024-10-17 16:45:30.716148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.673 [2024-10-17 16:45:30.751973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.673 [2024-10-17 16:45:30.752192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:54.673 [2024-10-17 16:45:30.752220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.704 ms 00:30:54.673 [2024-10-17 16:45:30.752232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.673 [2024-10-17 16:45:30.752334] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:54.673 [2024-10-17 16:45:30.752353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752455] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 
[2024-10-17 16:45:30.752818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.752994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:30:54.673 [2024-10-17 16:45:30.753124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:54.673 [2024-10-17 16:45:30.753252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:54.674 [2024-10-17 16:45:30.753652] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:54.674 [2024-10-17 16:45:30.753667] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:30:54.674 [2024-10-17 16:45:30.753678] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:54.674 [2024-10-17 16:45:30.753690] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:54.674 [2024-10-17 16:45:30.753710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:54.674 [2024-10-17 16:45:30.753724] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:54.674 [2024-10-17 16:45:30.753734] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:54.674 [2024-10-17 16:45:30.753746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:30:54.674 [2024-10-17 16:45:30.753757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:54.674 [2024-10-17 16:45:30.753768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:54.674 [2024-10-17 16:45:30.753777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:54.674 [2024-10-17 16:45:30.753790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.674 [2024-10-17 16:45:30.753805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:54.674 [2024-10-17 16:45:30.753818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:30:54.674 [2024-10-17 16:45:30.753828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.674 [2024-10-17 16:45:30.774001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.674 [2024-10-17 16:45:30.774057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:54.674 [2024-10-17 16:45:30.774079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.159 ms 00:30:54.674 [2024-10-17 16:45:30.774090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.674 [2024-10-17 16:45:30.774678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.674 [2024-10-17 16:45:30.774694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:54.674 [2024-10-17 16:45:30.774725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:30:54.674 [2024-10-17 16:45:30.774735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.674 [2024-10-17 16:45:30.843116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.674 [2024-10-17 16:45:30.843366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:54.674 [2024-10-17 16:45:30.843396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.674 [2024-10-17 16:45:30.843408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.674 [2024-10-17 16:45:30.843591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.674 [2024-10-17 16:45:30.843603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:54.674 [2024-10-17 16:45:30.843617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.674 [2024-10-17 16:45:30.843628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.674 [2024-10-17 16:45:30.843742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.674 [2024-10-17 16:45:30.843756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:54.674 [2024-10-17 16:45:30.843773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.674 [2024-10-17 16:45:30.843784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.674 [2024-10-17 16:45:30.843824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.674 [2024-10-17 16:45:30.843835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:54.674 [2024-10-17 16:45:30.843848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.674 [2024-10-17 16:45:30.843858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:30.973925] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:30.974190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:54.934 [2024-10-17 16:45:30.974221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:30.974232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:31.078372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:31.078437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:54.934 [2024-10-17 16:45:31.078455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:31.078466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:31.078605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:31.078618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:54.934 [2024-10-17 16:45:31.078652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:31.078663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:31.078742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:31.078758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:54.934 [2024-10-17 16:45:31.078771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:31.078781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:31.078915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:31.078929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:54.934 [2024-10-17 16:45:31.078942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:31.078952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:31.079011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:31.079023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:54.934 [2024-10-17 16:45:31.079039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:31.079049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:31.079102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:31.079113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:54.934 [2024-10-17 16:45:31.079129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:31.079142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.934 [2024-10-17 16:45:31.079201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:54.934 [2024-10-17 16:45:31.079213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:54.934 [2024-10-17 16:45:31.079229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:54.934 [2024-10-17 16:45:31.079238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:30:54.934 [2024-10-17 16:45:31.079431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 547.223 ms, result 0 00:30:54.934 true 00:30:54.934 16:45:31 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75375 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75375 ']' 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75375 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75375 00:30:54.934 killing process with pid 75375 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75375' 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75375 00:30:54.934 16:45:31 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75375 00:31:00.213 16:45:36 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:31:01.149 65536+0 records in 00:31:01.149 65536+0 records out 00:31:01.149 268435456 bytes (268 MB, 256 MiB) copied, 1.02788 s, 261 MB/s 00:31:01.149 16:45:37 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:01.149 [2024-10-17 16:45:37.302182] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
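Between the FTL shutdown and the next startup, the harness synthesizes the trim test's write payload: dd pulls 65536 blocks of 4 KiB (4096 x 65536 = 268435456 bytes = 256 MiB) from /dev/urandom, and the reported rate is consistent with 268435456 B / 1.02788 s, which is roughly 261 MB/s. spdk_dd then replays that pattern onto the ftl0 bdev using the bdev subsystem config captured earlier via save_subsystem_config. The same step in isolation, as a sketch with illustrative /tmp paths (only the spdk_dd flags shown in the log are assumed):

    # Generate the 256 MiB random pattern, then write it to the ftl0 bdev.
    # The /tmp paths are hypothetical; ftl.json is the output of
    # "rpc.py save_subsystem_config -n bdev" wrapped in {"subsystems": [...]},
    # as done by the echo lines earlier in this run.
    dd if=/dev/urandom of=/tmp/random_pattern bs=4K count=65536
    ./build/bin/spdk_dd --if=/tmp/random_pattern --ob=ftl0 --json=/tmp/ftl.json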
00:31:01.149 [2024-10-17 16:45:37.302305] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75587 ] 00:31:01.408 [2024-10-17 16:45:37.474216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.408 [2024-10-17 16:45:37.592892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.667 [2024-10-17 16:45:37.951777] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:01.667 [2024-10-17 16:45:37.951844] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:01.927 [2024-10-17 16:45:38.114465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.114713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:01.927 [2024-10-17 16:45:38.114739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:01.927 [2024-10-17 16:45:38.114751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.117980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.118019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.927 [2024-10-17 16:45:38.118032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:31:01.927 [2024-10-17 16:45:38.118043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.118155] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:01.927 [2024-10-17 16:45:38.119160] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:01.927 [2024-10-17 16:45:38.119192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.119204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.927 [2024-10-17 16:45:38.119216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:31:01.927 [2024-10-17 16:45:38.119226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.120712] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:01.927 [2024-10-17 16:45:38.139518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.139572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:01.927 [2024-10-17 16:45:38.139594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.835 ms 00:31:01.927 [2024-10-17 16:45:38.139605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.139755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.139788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:01.927 [2024-10-17 16:45:38.139800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:01.927 [2024-10-17 16:45:38.139817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.147170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:01.927 [2024-10-17 16:45:38.147385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.927 [2024-10-17 16:45:38.147409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.297 ms 00:31:01.927 [2024-10-17 16:45:38.147422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.147551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.147565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.927 [2024-10-17 16:45:38.147577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:01.927 [2024-10-17 16:45:38.147587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.147621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.147633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:01.927 [2024-10-17 16:45:38.147643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:01.927 [2024-10-17 16:45:38.147657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.147683] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:01.927 [2024-10-17 16:45:38.152791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.152842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.927 [2024-10-17 16:45:38.152857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.123 ms 00:31:01.927 [2024-10-17 16:45:38.152868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.152949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.927 [2024-10-17 16:45:38.152962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:01.927 [2024-10-17 16:45:38.152973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:01.927 [2024-10-17 16:45:38.152983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.927 [2024-10-17 16:45:38.153008] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:01.927 [2024-10-17 16:45:38.153031] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:01.927 [2024-10-17 16:45:38.153073] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:01.927 [2024-10-17 16:45:38.153091] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:01.927 [2024-10-17 16:45:38.153182] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:01.927 [2024-10-17 16:45:38.153196] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:01.927 [2024-10-17 16:45:38.153209] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:01.927 [2024-10-17 16:45:38.153223] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:01.927 [2024-10-17 16:45:38.153235] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:01.927 [2024-10-17 16:45:38.153246] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:01.927 [2024-10-17 16:45:38.153260] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:01.927 [2024-10-17 16:45:38.153270] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:01.927 [2024-10-17 16:45:38.153281] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:01.927 [2024-10-17 16:45:38.153291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.928 [2024-10-17 16:45:38.153301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:01.928 [2024-10-17 16:45:38.153311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:31:01.928 [2024-10-17 16:45:38.153321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.928 [2024-10-17 16:45:38.153397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.928 [2024-10-17 16:45:38.153409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:01.928 [2024-10-17 16:45:38.153420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:01.928 [2024-10-17 16:45:38.153433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.928 [2024-10-17 16:45:38.153522] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:01.928 [2024-10-17 16:45:38.153534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:01.928 [2024-10-17 16:45:38.153546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.928 [2024-10-17 16:45:38.153556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:01.928 [2024-10-17 16:45:38.153576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:01.928 [2024-10-17 16:45:38.153596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:01.928 [2024-10-17 16:45:38.153605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.928 [2024-10-17 16:45:38.153625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:01.928 [2024-10-17 16:45:38.153635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:01.928 [2024-10-17 16:45:38.153644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.928 [2024-10-17 16:45:38.153664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:01.928 [2024-10-17 16:45:38.153674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:01.928 [2024-10-17 16:45:38.153683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:01.928 [2024-10-17 16:45:38.153719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:01.928 [2024-10-17 16:45:38.153731] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:01.928 [2024-10-17 16:45:38.153751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.928 [2024-10-17 16:45:38.153770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:01.928 [2024-10-17 16:45:38.153779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.928 [2024-10-17 16:45:38.153798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:01.928 [2024-10-17 16:45:38.153807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.928 [2024-10-17 16:45:38.153844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:01.928 [2024-10-17 16:45:38.153853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.928 [2024-10-17 16:45:38.153872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:01.928 [2024-10-17 16:45:38.153881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.928 [2024-10-17 16:45:38.153901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:01.928 [2024-10-17 16:45:38.153910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:01.928 [2024-10-17 16:45:38.153919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.928 [2024-10-17 16:45:38.153928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:01.928 [2024-10-17 16:45:38.153938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:01.928 [2024-10-17 16:45:38.153947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:01.928 [2024-10-17 16:45:38.153966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:01.928 [2024-10-17 16:45:38.153976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.928 [2024-10-17 16:45:38.153985] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:01.928 [2024-10-17 16:45:38.154014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:01.928 [2024-10-17 16:45:38.154024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.928 [2024-10-17 16:45:38.154034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.928 [2024-10-17 16:45:38.154044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:01.928 [2024-10-17 16:45:38.154053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:01.928 [2024-10-17 16:45:38.154063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:01.928 
[2024-10-17 16:45:38.154075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:01.928 [2024-10-17 16:45:38.154084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:01.928 [2024-10-17 16:45:38.154094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:01.928 [2024-10-17 16:45:38.154104] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:01.928 [2024-10-17 16:45:38.154121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.928 [2024-10-17 16:45:38.154134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:01.928 [2024-10-17 16:45:38.154145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:01.928 [2024-10-17 16:45:38.154155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:01.928 [2024-10-17 16:45:38.154166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:01.928 [2024-10-17 16:45:38.154176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:01.928 [2024-10-17 16:45:38.154186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:01.928 [2024-10-17 16:45:38.154196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:01.928 [2024-10-17 16:45:38.154207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:01.928 [2024-10-17 16:45:38.154217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:01.928 [2024-10-17 16:45:38.154227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:01.928 [2024-10-17 16:45:38.154238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:01.928 [2024-10-17 16:45:38.154248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:01.928 [2024-10-17 16:45:38.154258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:01.928 [2024-10-17 16:45:38.154269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:01.928 [2024-10-17 16:45:38.154280] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:01.928 [2024-10-17 16:45:38.154291] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.928 [2024-10-17 16:45:38.154302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:01.928 [2024-10-17 16:45:38.154312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:01.928 [2024-10-17 16:45:38.154322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:01.928 [2024-10-17 16:45:38.154333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:01.928 [2024-10-17 16:45:38.154343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.928 [2024-10-17 16:45:38.154354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:01.928 [2024-10-17 16:45:38.154364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:31:01.928 [2024-10-17 16:45:38.154378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.928 [2024-10-17 16:45:38.194356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.928 [2024-10-17 16:45:38.194414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:01.928 [2024-10-17 16:45:38.194431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.981 ms 00:31:01.928 [2024-10-17 16:45:38.194443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.928 [2024-10-17 16:45:38.194615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.928 [2024-10-17 16:45:38.194628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:01.928 [2024-10-17 16:45:38.194640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:01.928 [2024-10-17 16:45:38.194656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.186 [2024-10-17 16:45:38.252577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.186 [2024-10-17 16:45:38.252636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:02.186 [2024-10-17 16:45:38.252651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.989 ms 00:31:02.186 [2024-10-17 16:45:38.252662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.186 [2024-10-17 16:45:38.252826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.186 [2024-10-17 16:45:38.252840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:02.186 [2024-10-17 16:45:38.252852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:02.187 [2024-10-17 16:45:38.252862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.253303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.253321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:02.187 [2024-10-17 16:45:38.253332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:31:02.187 [2024-10-17 16:45:38.253342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.253469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.253486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:02.187 [2024-10-17 16:45:38.253497] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:31:02.187 [2024-10-17 16:45:38.253507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.271680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.271739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:02.187 [2024-10-17 16:45:38.271755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.177 ms 00:31:02.187 [2024-10-17 16:45:38.271766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.291027] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:02.187 [2024-10-17 16:45:38.291077] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:02.187 [2024-10-17 16:45:38.291094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.291106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:02.187 [2024-10-17 16:45:38.291118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.182 ms 00:31:02.187 [2024-10-17 16:45:38.291128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.321045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.321240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:02.187 [2024-10-17 16:45:38.321277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.867 ms 00:31:02.187 [2024-10-17 16:45:38.321289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.340902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.340948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:02.187 [2024-10-17 16:45:38.340962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.472 ms 00:31:02.187 [2024-10-17 16:45:38.340972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.359210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.359382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:02.187 [2024-10-17 16:45:38.359404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.176 ms 00:31:02.187 [2024-10-17 16:45:38.359416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.360292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.360318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:02.187 [2024-10-17 16:45:38.360334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:31:02.187 [2024-10-17 16:45:38.360345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.447875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.447942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:02.187 [2024-10-17 16:45:38.447957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.632 ms 00:31:02.187 [2024-10-17 16:45:38.447968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.459833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:02.187 [2024-10-17 16:45:38.476595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.476653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:02.187 [2024-10-17 16:45:38.476670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.531 ms 00:31:02.187 [2024-10-17 16:45:38.476681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.476841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.476856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:02.187 [2024-10-17 16:45:38.476872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:02.187 [2024-10-17 16:45:38.476883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.476942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.476953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:02.187 [2024-10-17 16:45:38.476964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:02.187 [2024-10-17 16:45:38.476975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.476999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.477010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:02.187 [2024-10-17 16:45:38.477025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:02.187 [2024-10-17 16:45:38.477038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.187 [2024-10-17 16:45:38.477075] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:02.187 [2024-10-17 16:45:38.477093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.187 [2024-10-17 16:45:38.477103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:02.187 [2024-10-17 16:45:38.477116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:02.187 [2024-10-17 16:45:38.477129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.448 [2024-10-17 16:45:38.515349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.448 [2024-10-17 16:45:38.515401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:02.448 [2024-10-17 16:45:38.515423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.251 ms 00:31:02.448 [2024-10-17 16:45:38.515434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:02.448 [2024-10-17 16:45:38.515565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:02.448 [2024-10-17 16:45:38.515579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:02.448 [2024-10-17 16:45:38.515591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:02.448 [2024-10-17 16:45:38.515601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:02.448 [2024-10-17 16:45:38.516598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:02.448 [2024-10-17 16:45:38.521133] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 402.456 ms, result 0 00:31:02.448 [2024-10-17 16:45:38.521837] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:02.448 [2024-10-17 16:45:38.540680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:03.441  [2024-10-17T16:45:40.674Z] Copying: 29/256 [MB] (29 MBps) [2024-10-17T16:45:41.610Z] Copying: 58/256 [MB] (28 MBps) [2024-10-17T16:45:42.547Z] Copying: 84/256 [MB] (26 MBps) [2024-10-17T16:45:43.926Z] Copying: 110/256 [MB] (26 MBps) [2024-10-17T16:45:44.863Z] Copying: 137/256 [MB] (26 MBps) [2024-10-17T16:45:45.799Z] Copying: 164/256 [MB] (27 MBps) [2024-10-17T16:45:46.736Z] Copying: 192/256 [MB] (27 MBps) [2024-10-17T16:45:47.674Z] Copying: 219/256 [MB] (26 MBps) [2024-10-17T16:45:47.934Z] Copying: 247/256 [MB] (28 MBps) [2024-10-17T16:45:47.934Z] Copying: 256/256 [MB] (average 27 MBps)[2024-10-17 16:45:47.835566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:11.635 [2024-10-17 16:45:47.850296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.850356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:11.635 [2024-10-17 16:45:47.850372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:11.635 [2024-10-17 16:45:47.850383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.635 [2024-10-17 16:45:47.850415] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:11.635 [2024-10-17 16:45:47.854584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.854613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:11.635 [2024-10-17 16:45:47.854634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.158 ms 00:31:11.635 [2024-10-17 16:45:47.854645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.635 [2024-10-17 16:45:47.856478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.856516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:11.635 [2024-10-17 16:45:47.856530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.811 ms 00:31:11.635 [2024-10-17 16:45:47.856540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.635 [2024-10-17 16:45:47.863654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.863692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:11.635 [2024-10-17 16:45:47.863716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:31:11.635 [2024-10-17 16:45:47.863726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.635 [2024-10-17 16:45:47.869431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.869463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:11.635 
[2024-10-17 16:45:47.869477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.641 ms 00:31:11.635 [2024-10-17 16:45:47.869488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.635 [2024-10-17 16:45:47.906932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.907103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:11.635 [2024-10-17 16:45:47.907125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.452 ms 00:31:11.635 [2024-10-17 16:45:47.907136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.635 [2024-10-17 16:45:47.928812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.928875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:11.635 [2024-10-17 16:45:47.928891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.633 ms 00:31:11.635 [2024-10-17 16:45:47.928902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.635 [2024-10-17 16:45:47.929052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.635 [2024-10-17 16:45:47.929070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:11.635 [2024-10-17 16:45:47.929081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:11.635 [2024-10-17 16:45:47.929091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.896 [2024-10-17 16:45:47.967548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.896 [2024-10-17 16:45:47.967611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:11.896 [2024-10-17 16:45:47.967627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.499 ms 00:31:11.896 [2024-10-17 16:45:47.967637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.896 [2024-10-17 16:45:48.004637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.896 [2024-10-17 16:45:48.004880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:11.896 [2024-10-17 16:45:48.004906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.946 ms 00:31:11.896 [2024-10-17 16:45:48.004918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.896 [2024-10-17 16:45:48.043982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.896 [2024-10-17 16:45:48.044210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:11.896 [2024-10-17 16:45:48.044234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.054 ms 00:31:11.896 [2024-10-17 16:45:48.044244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.896 [2024-10-17 16:45:48.079966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.896 [2024-10-17 16:45:48.080014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:11.896 [2024-10-17 16:45:48.080029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.672 ms 00:31:11.896 [2024-10-17 16:45:48.080039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.896 [2024-10-17 16:45:48.080128] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:11.896 [2024-10-17 16:45:48.080146] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 
16:45:48.080431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:11.896 [2024-10-17 16:45:48.080443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:31:11.897 [2024-10-17 16:45:48.080719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.080993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:11.897 [2024-10-17 16:45:48.081293] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:11.897 [2024-10-17 16:45:48.081308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:31:11.897 [2024-10-17 16:45:48.081319] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:11.897 [2024-10-17 16:45:48.081340] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:11.897 [2024-10-17 16:45:48.081349] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:11.897 [2024-10-17 16:45:48.081360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:11.897 [2024-10-17 16:45:48.081370] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:11.897 [2024-10-17 16:45:48.081380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:11.897 [2024-10-17 16:45:48.081390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:11.897 [2024-10-17 16:45:48.081399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:11.897 [2024-10-17 16:45:48.081409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:11.897 [2024-10-17 16:45:48.081419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.897 [2024-10-17 16:45:48.081429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:11.897 [2024-10-17 16:45:48.081440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:31:11.897 [2024-10-17 16:45:48.081450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.897 [2024-10-17 16:45:48.100889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.897 [2024-10-17 16:45:48.100930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:11.897 [2024-10-17 16:45:48.100943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.443 ms 00:31:11.897 [2024-10-17 16:45:48.100955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.897 [2024-10-17 16:45:48.101493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.897 [2024-10-17 16:45:48.101508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:11.897 [2024-10-17 16:45:48.101518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms 00:31:11.897 [2024-10-17 16:45:48.101535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.897 [2024-10-17 16:45:48.155885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.897 [2024-10-17 16:45:48.155947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:11.897 [2024-10-17 16:45:48.155962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.897 [2024-10-17 16:45:48.155973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.897 [2024-10-17 16:45:48.156086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.898 [2024-10-17 16:45:48.156098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:11.898 [2024-10-17 16:45:48.156108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.898 [2024-10-17 16:45:48.156123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:11.898 [2024-10-17 16:45:48.156178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.898 [2024-10-17 16:45:48.156191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:11.898 [2024-10-17 16:45:48.156213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.898 [2024-10-17 16:45:48.156223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.898 [2024-10-17 16:45:48.156242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:11.898 [2024-10-17 16:45:48.156253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:11.898 [2024-10-17 16:45:48.156264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:11.898 [2024-10-17 16:45:48.156274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.279810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.279874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:12.158 [2024-10-17 16:45:48.279889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 [2024-10-17 16:45:48.279899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.383095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:12.158 [2024-10-17 16:45:48.383110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 [2024-10-17 16:45:48.383128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.383238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:12.158 [2024-10-17 16:45:48.383250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 [2024-10-17 16:45:48.383260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.383300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:12.158 [2024-10-17 16:45:48.383310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 [2024-10-17 16:45:48.383321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.383458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:12.158 [2024-10-17 16:45:48.383469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 [2024-10-17 16:45:48.383479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.383528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:12.158 [2024-10-17 16:45:48.383539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 
[2024-10-17 16:45:48.383549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.383605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:12.158 [2024-10-17 16:45:48.383616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 [2024-10-17 16:45:48.383626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:12.158 [2024-10-17 16:45:48.383684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:12.158 [2024-10-17 16:45:48.383695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:12.158 [2024-10-17 16:45:48.383729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:12.158 [2024-10-17 16:45:48.383875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.461 ms, result 0 00:31:13.539 00:31:13.539 00:31:13.539 16:45:49 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75717 00:31:13.539 16:45:49 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:31:13.539 16:45:49 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75717 00:31:13.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.539 16:45:49 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75717 ']' 00:31:13.539 16:45:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.539 16:45:49 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:13.539 16:45:49 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.539 16:45:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:13.539 16:45:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:31:13.539 [2024-10-17 16:45:49.672416] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:31:13.539 [2024-10-17 16:45:49.672792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75717 ] 00:31:13.799 [2024-10-17 16:45:49.846021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.799 [2024-10-17 16:45:49.967354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.750 16:45:50 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:14.750 16:45:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:31:14.750 16:45:50 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:31:15.043 [2024-10-17 16:45:51.100805] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.043 [2024-10-17 16:45:51.100886] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:15.043 [2024-10-17 16:45:51.284643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.284841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:15.043 [2024-10-17 16:45:51.284872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:15.043 [2024-10-17 16:45:51.284883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.288713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.288750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:15.043 [2024-10-17 16:45:51.288765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.806 ms 00:31:15.043 [2024-10-17 16:45:51.288776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.288882] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:15.043 [2024-10-17 16:45:51.289818] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:15.043 [2024-10-17 16:45:51.289854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.289865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:15.043 [2024-10-17 16:45:51.289879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:31:15.043 [2024-10-17 16:45:51.289889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.291338] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:15.043 [2024-10-17 16:45:51.310988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.311043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:15.043 [2024-10-17 16:45:51.311058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.691 ms 00:31:15.043 [2024-10-17 16:45:51.311073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.311173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.311189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:15.043 [2024-10-17 16:45:51.311201] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:31:15.043 [2024-10-17 16:45:51.311213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.317821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.317859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:15.043 [2024-10-17 16:45:51.317871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.570 ms 00:31:15.043 [2024-10-17 16:45:51.317884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.318020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.318040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:15.043 [2024-10-17 16:45:51.318051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:15.043 [2024-10-17 16:45:51.318066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.318094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.318117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:15.043 [2024-10-17 16:45:51.318127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:15.043 [2024-10-17 16:45:51.318140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.318166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:15.043 [2024-10-17 16:45:51.323074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.323104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:15.043 [2024-10-17 16:45:51.323119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.919 ms 00:31:15.043 [2024-10-17 16:45:51.323129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.043 [2024-10-17 16:45:51.323201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.043 [2024-10-17 16:45:51.323214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:15.043 [2024-10-17 16:45:51.323227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:15.044 [2024-10-17 16:45:51.323237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.044 [2024-10-17 16:45:51.323263] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:15.044 [2024-10-17 16:45:51.323287] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:15.044 [2024-10-17 16:45:51.323334] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:15.044 [2024-10-17 16:45:51.323354] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:15.044 [2024-10-17 16:45:51.323449] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:15.044 [2024-10-17 16:45:51.323462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:15.044 [2024-10-17 16:45:51.323484] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:15.044 [2024-10-17 16:45:51.323498] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:15.044 [2024-10-17 16:45:51.323521] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:15.044 [2024-10-17 16:45:51.323533] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:15.044 [2024-10-17 16:45:51.323547] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:15.044 [2024-10-17 16:45:51.323558] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:15.044 [2024-10-17 16:45:51.323576] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:15.044 [2024-10-17 16:45:51.323587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.044 [2024-10-17 16:45:51.323602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:15.044 [2024-10-17 16:45:51.323613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:31:15.044 [2024-10-17 16:45:51.323627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.044 [2024-10-17 16:45:51.323724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.044 [2024-10-17 16:45:51.323742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:15.044 [2024-10-17 16:45:51.323757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:15.044 [2024-10-17 16:45:51.323771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.044 [2024-10-17 16:45:51.323861] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:15.044 [2024-10-17 16:45:51.323893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:15.044 [2024-10-17 16:45:51.323905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.044 [2024-10-17 16:45:51.323920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.044 [2024-10-17 16:45:51.323930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:15.044 [2024-10-17 16:45:51.323944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:15.044 [2024-10-17 16:45:51.323954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:15.044 [2024-10-17 16:45:51.323975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:15.044 [2024-10-17 16:45:51.323985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.044 [2024-10-17 16:45:51.324010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:15.044 [2024-10-17 16:45:51.324024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:15.044 [2024-10-17 16:45:51.324035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:15.044 [2024-10-17 16:45:51.324049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:15.044 [2024-10-17 16:45:51.324060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:15.044 [2024-10-17 16:45:51.324074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.044 
[2024-10-17 16:45:51.324084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:15.044 [2024-10-17 16:45:51.324098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:15.044 [2024-10-17 16:45:51.324107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:15.044 [2024-10-17 16:45:51.324141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.044 [2024-10-17 16:45:51.324166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:15.044 [2024-10-17 16:45:51.324185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.044 [2024-10-17 16:45:51.324208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:15.044 [2024-10-17 16:45:51.324217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.044 [2024-10-17 16:45:51.324241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:15.044 [2024-10-17 16:45:51.324255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:15.044 [2024-10-17 16:45:51.324280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:15.044 [2024-10-17 16:45:51.324289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.044 [2024-10-17 16:45:51.324312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:15.044 [2024-10-17 16:45:51.324326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:15.044 [2024-10-17 16:45:51.324335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:15.044 [2024-10-17 16:45:51.324349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:15.044 [2024-10-17 16:45:51.324359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:15.044 [2024-10-17 16:45:51.324384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:15.044 [2024-10-17 16:45:51.324407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:15.044 [2024-10-17 16:45:51.324417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324432] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:15.044 [2024-10-17 16:45:51.324442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:15.044 [2024-10-17 16:45:51.324457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:15.044 [2024-10-17 16:45:51.324472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:15.044 [2024-10-17 16:45:51.324487] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:31:15.044 [2024-10-17 16:45:51.324497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:15.044 [2024-10-17 16:45:51.324511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:15.044 [2024-10-17 16:45:51.324520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:15.044 [2024-10-17 16:45:51.324534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:15.044 [2024-10-17 16:45:51.324543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:15.044 [2024-10-17 16:45:51.324559] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:15.044 [2024-10-17 16:45:51.324573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.044 [2024-10-17 16:45:51.324595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:15.044 [2024-10-17 16:45:51.324606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:15.044 [2024-10-17 16:45:51.324621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:15.044 [2024-10-17 16:45:51.324632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:15.044 [2024-10-17 16:45:51.324647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:15.044 [2024-10-17 16:45:51.324658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:15.044 [2024-10-17 16:45:51.324673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:15.044 [2024-10-17 16:45:51.324683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:15.044 [2024-10-17 16:45:51.324714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:15.044 [2024-10-17 16:45:51.324725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:15.044 [2024-10-17 16:45:51.324740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:15.044 [2024-10-17 16:45:51.324751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:15.044 [2024-10-17 16:45:51.324767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:15.044 [2024-10-17 16:45:51.324778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:15.044 [2024-10-17 16:45:51.324793] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:15.044 [2024-10-17 
16:45:51.324805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:15.044 [2024-10-17 16:45:51.324826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:15.044 [2024-10-17 16:45:51.324837] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:15.044 [2024-10-17 16:45:51.324852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:15.044 [2024-10-17 16:45:51.324863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:15.044 [2024-10-17 16:45:51.324880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.044 [2024-10-17 16:45:51.324891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:15.044 [2024-10-17 16:45:51.324917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:31:15.044 [2024-10-17 16:45:51.324927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.366255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.366464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:15.305 [2024-10-17 16:45:51.366499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.324 ms 00:31:15.305 [2024-10-17 16:45:51.366510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.366662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.366680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:15.305 [2024-10-17 16:45:51.366696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:31:15.305 [2024-10-17 16:45:51.366723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.415686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.415747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:15.305 [2024-10-17 16:45:51.415769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.008 ms 00:31:15.305 [2024-10-17 16:45:51.415785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.415917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.415930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:15.305 [2024-10-17 16:45:51.415946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:15.305 [2024-10-17 16:45:51.415957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.416402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.416422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:15.305 [2024-10-17 16:45:51.416438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:31:15.305 [2024-10-17 16:45:51.416448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.416584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.416597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:15.305 [2024-10-17 16:45:51.416613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:15.305 [2024-10-17 16:45:51.416623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.438617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.438660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:15.305 [2024-10-17 16:45:51.438681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.997 ms 00:31:15.305 [2024-10-17 16:45:51.438691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.458314] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:31:15.305 [2024-10-17 16:45:51.458355] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:15.305 [2024-10-17 16:45:51.458377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.458389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:15.305 [2024-10-17 16:45:51.458405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.561 ms 00:31:15.305 [2024-10-17 16:45:51.458415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.488024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.488345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:15.305 [2024-10-17 16:45:51.488392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.541 ms 00:31:15.305 [2024-10-17 16:45:51.488405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.507305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.507457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:15.305 [2024-10-17 16:45:51.507495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.745 ms 00:31:15.305 [2024-10-17 16:45:51.507506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.525629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.525669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:15.305 [2024-10-17 16:45:51.525688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.056 ms 00:31:15.305 [2024-10-17 16:45:51.525709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.305 [2024-10-17 16:45:51.526502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.305 [2024-10-17 16:45:51.526533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:15.305 [2024-10-17 16:45:51.526551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:31:15.305 [2024-10-17 16:45:51.526561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.563 [2024-10-17 
16:45:51.621705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.563 [2024-10-17 16:45:51.621773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:15.563 [2024-10-17 16:45:51.621797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.255 ms 00:31:15.563 [2024-10-17 16:45:51.621808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.563 [2024-10-17 16:45:51.633369] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:15.563 [2024-10-17 16:45:51.649864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.563 [2024-10-17 16:45:51.649950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:15.563 [2024-10-17 16:45:51.649968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.979 ms 00:31:15.563 [2024-10-17 16:45:51.649986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.563 [2024-10-17 16:45:51.650139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.563 [2024-10-17 16:45:51.650160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:15.563 [2024-10-17 16:45:51.650172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:15.563 [2024-10-17 16:45:51.650187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.563 [2024-10-17 16:45:51.650244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.563 [2024-10-17 16:45:51.650260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:15.564 [2024-10-17 16:45:51.650272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:15.564 [2024-10-17 16:45:51.650287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.564 [2024-10-17 16:45:51.650314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.564 [2024-10-17 16:45:51.650337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:15.564 [2024-10-17 16:45:51.650347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:15.564 [2024-10-17 16:45:51.650366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.564 [2024-10-17 16:45:51.650407] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:15.564 [2024-10-17 16:45:51.650430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.564 [2024-10-17 16:45:51.650440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:15.564 [2024-10-17 16:45:51.650456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:15.564 [2024-10-17 16:45:51.650471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.564 [2024-10-17 16:45:51.688450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.564 [2024-10-17 16:45:51.688510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:15.564 [2024-10-17 16:45:51.688532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.003 ms 00:31:15.564 [2024-10-17 16:45:51.688543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:15.564 [2024-10-17 16:45:51.688681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:15.564 [2024-10-17 16:45:51.688695] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:15.564 [2024-10-17 16:45:51.688731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:31:15.564 [2024-10-17 16:45:51.688742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.564 [2024-10-17 16:45:51.689992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:31:15.564 [2024-10-17 16:45:51.694449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 405.550 ms, result 0
00:31:15.564 [2024-10-17 16:45:51.695544] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:15.564 Some configs were skipped because the RPC state that can call them passed over.
00:31:15.564 16:45:51 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:31:15.822 [2024-10-17 16:45:51.942872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:15.822 [2024-10-17 16:45:51.943113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:31:15.822 [2024-10-17 16:45:51.943235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms
00:31:15.822 [2024-10-17 16:45:51.943289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:15.822 [2024-10-17 16:45:51.943370] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.051 ms, result 0
00:31:15.822 true
00:31:15.822 16:45:51 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:31:16.079 [2024-10-17 16:45:52.154400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:16.079 [2024-10-17 16:45:52.154453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:31:16.079 [2024-10-17 16:45:52.154475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms
00:31:16.079 [2024-10-17 16:45:52.154486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:16.079 [2024-10-17 16:45:52.154536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.376 ms, result 0
00:31:16.079 true
00:31:16.079 16:45:52 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75717
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75717 ']'
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75717
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75717
00:31:16.079 killing process with pid 75717
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75717'
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75717
00:31:16.079 16:45:52 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75717
00:31:17.456 [2024-10-17 16:45:53.333216]
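Note: the two bdev_ftl_unmap calls above trim the two extremes of the device: 1024 blocks starting at LBA 0, then 1024 blocks starting at LBA 23591936. The second range ends exactly at the device boundary, because the startup log reported 23592960 L2P entries (one mapping entry per addressable block). A minimal shell check of that boundary:

  echo $(( 23591936 + 1024 ))   # prints 23592960, the "L2P entries" value from startup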
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.333282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:17.456 [2024-10-17 16:45:53.333297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:17.456 [2024-10-17 16:45:53.333309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.333333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:17.456 [2024-10-17 16:45:53.337367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.337412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:17.456 [2024-10-17 16:45:53.337436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:31:17.456 [2024-10-17 16:45:53.337446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.337718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.337733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:17.456 [2024-10-17 16:45:53.337746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:31:17.456 [2024-10-17 16:45:53.337755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.340926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.340961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:17.456 [2024-10-17 16:45:53.340976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.153 ms 00:31:17.456 [2024-10-17 16:45:53.340986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.346735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.346767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:17.456 [2024-10-17 16:45:53.346784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.716 ms 00:31:17.456 [2024-10-17 16:45:53.346794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.362005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.362040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:17.456 [2024-10-17 16:45:53.362059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.179 ms 00:31:17.456 [2024-10-17 16:45:53.362080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.372435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.372632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:17.456 [2024-10-17 16:45:53.372661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.299 ms 00:31:17.456 [2024-10-17 16:45:53.372676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.372890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.372908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:17.456 [2024-10-17 16:45:53.372924] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:31:17.456 [2024-10-17 16:45:53.372935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.388242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.388277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:17.456 [2024-10-17 16:45:53.388300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.305 ms 00:31:17.456 [2024-10-17 16:45:53.388310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.402772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.402905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:17.456 [2024-10-17 16:45:53.402939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.427 ms 00:31:17.456 [2024-10-17 16:45:53.402950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.417898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.418031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:17.456 [2024-10-17 16:45:53.418060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.910 ms 00:31:17.456 [2024-10-17 16:45:53.418071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.432820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.456 [2024-10-17 16:45:53.432951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:17.456 [2024-10-17 16:45:53.432976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.691 ms 00:31:17.456 [2024-10-17 16:45:53.432985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.456 [2024-10-17 16:45:53.433038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:17.456 [2024-10-17 16:45:53.433056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:17.456 [2024-10-17 16:45:53.433072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:17.456 [2024-10-17 16:45:53.433083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:17.456 [2024-10-17 16:45:53.433096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:17.456 [2024-10-17 16:45:53.433108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 
16:45:53.433185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:31:17.457 [2024-10-17 16:45:53.433520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.433988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:17.457 [2024-10-17 16:45:53.434413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:17.457 [2024-10-17 16:45:53.434437] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:31:17.457 [2024-10-17 16:45:53.434461] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:17.457 [2024-10-17 16:45:53.434484] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:17.457 [2024-10-17 16:45:53.434499] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:17.457 [2024-10-17 16:45:53.434515] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:17.457 [2024-10-17 16:45:53.434525] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:17.457 [2024-10-17 16:45:53.434540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:17.457 [2024-10-17 16:45:53.434550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:17.457 [2024-10-17 16:45:53.434564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:17.457 [2024-10-17 16:45:53.434573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:17.457 [2024-10-17 16:45:53.434588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
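Note: the statistics above are what a trim-only run should look like: with "user writes: 0" and "total writes: 960", every recorded write was FTL-internal metadata traffic, so the write amplification factor, i.e. total writes divided by user writes, is a division by zero, which ftl_debug prints as "WAF: inf".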
00:31:17.457 [2024-10-17 16:45:53.434598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:17.457 [2024-10-17 16:45:53.434614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:31:17.457 [2024-10-17 16:45:53.434624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.457 [2024-10-17 16:45:53.454708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.457 [2024-10-17 16:45:53.454742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:17.457 [2024-10-17 16:45:53.454766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.081 ms 00:31:17.457 [2024-10-17 16:45:53.454778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.457 [2024-10-17 16:45:53.455325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.457 [2024-10-17 16:45:53.455348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:17.457 [2024-10-17 16:45:53.455365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:31:17.457 [2024-10-17 16:45:53.455375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.457 [2024-10-17 16:45:53.526589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.457 [2024-10-17 16:45:53.526640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:17.457 [2024-10-17 16:45:53.526660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.457 [2024-10-17 16:45:53.526671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.457 [2024-10-17 16:45:53.526822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.457 [2024-10-17 16:45:53.526838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:17.457 [2024-10-17 16:45:53.526854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.457 [2024-10-17 16:45:53.526865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.457 [2024-10-17 16:45:53.526933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.457 [2024-10-17 16:45:53.526946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:17.457 [2024-10-17 16:45:53.526967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.457 [2024-10-17 16:45:53.526977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.457 [2024-10-17 16:45:53.527003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.457 [2024-10-17 16:45:53.527014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:17.457 [2024-10-17 16:45:53.527028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.457 [2024-10-17 16:45:53.527038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.457 [2024-10-17 16:45:53.653229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.457 [2024-10-17 16:45:53.653308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:17.457 [2024-10-17 16:45:53.653330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.457 [2024-10-17 16:45:53.653340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.715 [2024-10-17 
16:45:53.754581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.716 [2024-10-17 16:45:53.754649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:17.716 [2024-10-17 16:45:53.754669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.716 [2024-10-17 16:45:53.754680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.716 [2024-10-17 16:45:53.754818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.716 [2024-10-17 16:45:53.754838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:17.716 [2024-10-17 16:45:53.754859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.716 [2024-10-17 16:45:53.754870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.716 [2024-10-17 16:45:53.754904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.716 [2024-10-17 16:45:53.754915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:17.716 [2024-10-17 16:45:53.754931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.716 [2024-10-17 16:45:53.754942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.716 [2024-10-17 16:45:53.755066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.716 [2024-10-17 16:45:53.755080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:17.716 [2024-10-17 16:45:53.755101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.716 [2024-10-17 16:45:53.755111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.716 [2024-10-17 16:45:53.755157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.716 [2024-10-17 16:45:53.755170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:17.716 [2024-10-17 16:45:53.755185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.716 [2024-10-17 16:45:53.755196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.716 [2024-10-17 16:45:53.755241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.716 [2024-10-17 16:45:53.755253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:17.716 [2024-10-17 16:45:53.755278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.716 [2024-10-17 16:45:53.755288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.716 [2024-10-17 16:45:53.755339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:17.716 [2024-10-17 16:45:53.755351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:17.716 [2024-10-17 16:45:53.755366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:17.716 [2024-10-17 16:45:53.755376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.716 [2024-10-17 16:45:53.755525] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 422.963 ms, result 0 00:31:18.655 16:45:54 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:31:18.655 16:45:54 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:18.655 [2024-10-17 16:45:54.877337] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:31:18.655 [2024-10-17 16:45:54.877660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75786 ] 00:31:18.914 [2024-10-17 16:45:55.050540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.914 [2024-10-17 16:45:55.169253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.482 [2024-10-17 16:45:55.544121] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.482 [2024-10-17 16:45:55.544339] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:19.482 [2024-10-17 16:45:55.707116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.707373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:19.482 [2024-10-17 16:45:55.707399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:19.482 [2024-10-17 16:45:55.707411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.710939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.710983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:19.482 [2024-10-17 16:45:55.710995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.503 ms 00:31:19.482 [2024-10-17 16:45:55.711006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.711120] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:19.482 [2024-10-17 16:45:55.712088] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:19.482 [2024-10-17 16:45:55.712124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.712135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:19.482 [2024-10-17 16:45:55.712147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:31:19.482 [2024-10-17 16:45:55.712157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.713743] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:19.482 [2024-10-17 16:45:55.734167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.734235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:19.482 [2024-10-17 16:45:55.734267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.457 ms 00:31:19.482 [2024-10-17 16:45:55.734278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.734406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.734421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:19.482 [2024-10-17 16:45:55.734432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.028 ms 00:31:19.482 [2024-10-17 16:45:55.734443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.741570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.741772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:19.482 [2024-10-17 16:45:55.741795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.091 ms 00:31:19.482 [2024-10-17 16:45:55.741806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.741929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.741943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:19.482 [2024-10-17 16:45:55.741955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:19.482 [2024-10-17 16:45:55.741965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.741998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.482 [2024-10-17 16:45:55.742009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:19.482 [2024-10-17 16:45:55.742020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:19.482 [2024-10-17 16:45:55.742035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.482 [2024-10-17 16:45:55.742062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:19.482 [2024-10-17 16:45:55.747076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.483 [2024-10-17 16:45:55.747114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:19.483 [2024-10-17 16:45:55.747127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:31:19.483 [2024-10-17 16:45:55.747138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.483 [2024-10-17 16:45:55.747220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.483 [2024-10-17 16:45:55.747232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:19.483 [2024-10-17 16:45:55.747243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:19.483 [2024-10-17 16:45:55.747254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.483 [2024-10-17 16:45:55.747277] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:19.483 [2024-10-17 16:45:55.747300] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:19.483 [2024-10-17 16:45:55.747340] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:19.483 [2024-10-17 16:45:55.747361] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:19.483 [2024-10-17 16:45:55.747452] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:19.483 [2024-10-17 16:45:55.747467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:19.483 [2024-10-17 16:45:55.747480] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:19.483 [2024-10-17 16:45:55.747494] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:19.483 [2024-10-17 16:45:55.747506] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:19.483 [2024-10-17 16:45:55.747517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:19.483 [2024-10-17 16:45:55.747531] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:19.483 [2024-10-17 16:45:55.747541] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:19.483 [2024-10-17 16:45:55.747551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:19.483 [2024-10-17 16:45:55.747562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.483 [2024-10-17 16:45:55.747572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:19.483 [2024-10-17 16:45:55.747583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:31:19.483 [2024-10-17 16:45:55.747593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.483 [2024-10-17 16:45:55.747670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.483 [2024-10-17 16:45:55.747681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:19.483 [2024-10-17 16:45:55.747691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:19.483 [2024-10-17 16:45:55.747723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.483 [2024-10-17 16:45:55.747814] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:19.483 [2024-10-17 16:45:55.747827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:19.483 [2024-10-17 16:45:55.747838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.483 [2024-10-17 16:45:55.747848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.483 [2024-10-17 16:45:55.747858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:19.483 [2024-10-17 16:45:55.747868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:19.483 [2024-10-17 16:45:55.747878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:19.483 [2024-10-17 16:45:55.747889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:19.483 [2024-10-17 16:45:55.747899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:19.483 [2024-10-17 16:45:55.747908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.483 [2024-10-17 16:45:55.747917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:19.483 [2024-10-17 16:45:55.747928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:19.483 [2024-10-17 16:45:55.747937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:19.483 [2024-10-17 16:45:55.747960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:19.483 [2024-10-17 16:45:55.747971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:19.483 [2024-10-17 16:45:55.747981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.483 [2024-10-17 16:45:55.747991] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:19.483 [2024-10-17 16:45:55.748000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:19.483 [2024-10-17 16:45:55.748009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:19.483 [2024-10-17 16:45:55.748028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.483 [2024-10-17 16:45:55.748046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:19.483 [2024-10-17 16:45:55.748055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.483 [2024-10-17 16:45:55.748073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:19.483 [2024-10-17 16:45:55.748082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.483 [2024-10-17 16:45:55.748100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:19.483 [2024-10-17 16:45:55.748110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:19.483 [2024-10-17 16:45:55.748128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:19.483 [2024-10-17 16:45:55.748137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.483 [2024-10-17 16:45:55.748154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:19.483 [2024-10-17 16:45:55.748163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:19.483 [2024-10-17 16:45:55.748173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:19.483 [2024-10-17 16:45:55.748182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:19.483 [2024-10-17 16:45:55.748191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:19.483 [2024-10-17 16:45:55.748199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:19.483 [2024-10-17 16:45:55.748217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:19.483 [2024-10-17 16:45:55.748226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:19.483 [2024-10-17 16:45:55.748247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:19.483 [2024-10-17 16:45:55.748257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:19.483 [2024-10-17 16:45:55.748266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:19.483 [2024-10-17 16:45:55.748276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:19.483 
[2024-10-17 16:45:55.748286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:19.483 [2024-10-17 16:45:55.748295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:19.483 [2024-10-17 16:45:55.748305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:19.483 [2024-10-17 16:45:55.748314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:19.483 [2024-10-17 16:45:55.748324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:19.483 [2024-10-17 16:45:55.748334] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:19.483 [2024-10-17 16:45:55.748350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.483 [2024-10-17 16:45:55.748362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:19.483 [2024-10-17 16:45:55.748382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:19.483 [2024-10-17 16:45:55.748393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:19.483 [2024-10-17 16:45:55.748403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:19.483 [2024-10-17 16:45:55.748414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:19.483 [2024-10-17 16:45:55.748424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:19.483 [2024-10-17 16:45:55.748434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:19.483 [2024-10-17 16:45:55.748445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:19.484 [2024-10-17 16:45:55.748456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:19.484 [2024-10-17 16:45:55.748466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:19.484 [2024-10-17 16:45:55.748477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:19.484 [2024-10-17 16:45:55.748503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:19.484 [2024-10-17 16:45:55.748514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:19.484 [2024-10-17 16:45:55.748525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:19.484 [2024-10-17 16:45:55.748536] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:19.484 [2024-10-17 16:45:55.748548] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:19.484 [2024-10-17 16:45:55.748560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:19.484 [2024-10-17 16:45:55.748571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:19.484 [2024-10-17 16:45:55.748582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:19.484 [2024-10-17 16:45:55.748593] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:19.484 [2024-10-17 16:45:55.748606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.484 [2024-10-17 16:45:55.748617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:19.484 [2024-10-17 16:45:55.748628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:31:19.484 [2024-10-17 16:45:55.748642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.789478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.789716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:19.743 [2024-10-17 16:45:55.789742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.839 ms 00:31:19.743 [2024-10-17 16:45:55.789754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.789923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.789938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:19.743 [2024-10-17 16:45:55.789949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:19.743 [2024-10-17 16:45:55.789965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.852259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.852313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:19.743 [2024-10-17 16:45:55.852329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.366 ms 00:31:19.743 [2024-10-17 16:45:55.852340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.852494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.852508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:19.743 [2024-10-17 16:45:55.852520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:19.743 [2024-10-17 16:45:55.852530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.853007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.853022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:19.743 [2024-10-17 16:45:55.853033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:31:19.743 [2024-10-17 16:45:55.853043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 
16:45:55.853173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.853190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:19.743 [2024-10-17 16:45:55.853200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:31:19.743 [2024-10-17 16:45:55.853210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.873994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.874048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:19.743 [2024-10-17 16:45:55.874066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.792 ms 00:31:19.743 [2024-10-17 16:45:55.874076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.893813] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:31:19.743 [2024-10-17 16:45:55.893866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:19.743 [2024-10-17 16:45:55.893883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.743 [2024-10-17 16:45:55.893895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:19.743 [2024-10-17 16:45:55.893908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.653 ms 00:31:19.743 [2024-10-17 16:45:55.893918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.743 [2024-10-17 16:45:55.924494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-17 16:45:55.924695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:19.744 [2024-10-17 16:45:55.924730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.516 ms 00:31:19.744 [2024-10-17 16:45:55.924742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-17 16:45:55.944389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-17 16:45:55.944473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:19.744 [2024-10-17 16:45:55.944489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.517 ms 00:31:19.744 [2024-10-17 16:45:55.944500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-17 16:45:55.963589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-17 16:45:55.963640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:19.744 [2024-10-17 16:45:55.963654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.008 ms 00:31:19.744 [2024-10-17 16:45:55.963665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:19.744 [2024-10-17 16:45:55.964522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:19.744 [2024-10-17 16:45:55.964559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:19.744 [2024-10-17 16:45:55.964572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:31:19.744 [2024-10-17 16:45:55.964583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.002 [2024-10-17 16:45:56.054420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.054493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:20.003 [2024-10-17 16:45:56.054512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.951 ms 00:31:20.003 [2024-10-17 16:45:56.054523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.067478] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:20.003 [2024-10-17 16:45:56.084577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.084860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:20.003 [2024-10-17 16:45:56.084889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.932 ms 00:31:20.003 [2024-10-17 16:45:56.084903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.085077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.085091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:20.003 [2024-10-17 16:45:56.085103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:20.003 [2024-10-17 16:45:56.085113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.085172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.085184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:20.003 [2024-10-17 16:45:56.085196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:20.003 [2024-10-17 16:45:56.085205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.085234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.085248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:20.003 [2024-10-17 16:45:56.085259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:20.003 [2024-10-17 16:45:56.085269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.085307] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:20.003 [2024-10-17 16:45:56.085320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.085330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:20.003 [2024-10-17 16:45:56.085341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:20.003 [2024-10-17 16:45:56.085351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.123145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.123205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:20.003 [2024-10-17 16:45:56.123221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.831 ms 00:31:20.003 [2024-10-17 16:45:56.123233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.123379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:20.003 [2024-10-17 16:45:56.123393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:31:20.003 [2024-10-17 16:45:56.123405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:31:20.003 [2024-10-17 16:45:56.123416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:20.003 [2024-10-17 16:45:56.124411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:20.003 [2024-10-17 16:45:56.129159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.628 ms, result 0 00:31:20.003 [2024-10-17 16:45:56.130180] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:20.003 [2024-10-17 16:45:56.149735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:20.940  [2024-10-17T16:45:58.176Z] Copying: 32/256 [MB] (32 MBps) [2024-10-17T16:45:59.551Z] Copying: 61/256 [MB] (29 MBps) [2024-10-17T16:46:00.488Z] Copying: 88/256 [MB] (27 MBps) [2024-10-17T16:46:01.424Z] Copying: 116/256 [MB] (27 MBps) [2024-10-17T16:46:02.365Z] Copying: 143/256 [MB] (27 MBps) [2024-10-17T16:46:03.302Z] Copying: 171/256 [MB] (27 MBps) [2024-10-17T16:46:04.239Z] Copying: 201/256 [MB] (29 MBps) [2024-10-17T16:46:05.184Z] Copying: 229/256 [MB] (28 MBps) [2024-10-17T16:46:05.184Z] Copying: 256/256 [MB] (average 28 MBps)[2024-10-17 16:46:05.085622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:28.885 [2024-10-17 16:46:05.100621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.885 [2024-10-17 16:46:05.100835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:28.886 [2024-10-17 16:46:05.100878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:28.886 [2024-10-17 16:46:05.100889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.886 [2024-10-17 16:46:05.100937] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:28.886 [2024-10-17 16:46:05.105249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.886 [2024-10-17 16:46:05.105280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:28.886 [2024-10-17 16:46:05.105293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms 00:31:28.886 [2024-10-17 16:46:05.105304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.886 [2024-10-17 16:46:05.105538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.886 [2024-10-17 16:46:05.105552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:28.886 [2024-10-17 16:46:05.105563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:31:28.886 [2024-10-17 16:46:05.105574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.886 [2024-10-17 16:46:05.108447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.886 [2024-10-17 16:46:05.108571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:28.886 [2024-10-17 16:46:05.108602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:31:28.886 [2024-10-17 16:46:05.108613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.886 [2024-10-17 16:46:05.114243] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.886 [2024-10-17 16:46:05.114374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:28.886 [2024-10-17 16:46:05.114506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.612 ms 00:31:28.886 [2024-10-17 16:46:05.114544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.886 [2024-10-17 16:46:05.153074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.886 [2024-10-17 16:46:05.153317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:28.886 [2024-10-17 16:46:05.153438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.441 ms 00:31:28.886 [2024-10-17 16:46:05.153476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.886 [2024-10-17 16:46:05.174781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.886 [2024-10-17 16:46:05.174938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:28.886 [2024-10-17 16:46:05.175060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.213 ms 00:31:28.886 [2024-10-17 16:46:05.175109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:28.886 [2024-10-17 16:46:05.175343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:28.886 [2024-10-17 16:46:05.175408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:28.886 [2024-10-17 16:46:05.175494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:31:28.886 [2024-10-17 16:46:05.175524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.192 [2024-10-17 16:46:05.211909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.192 [2024-10-17 16:46:05.212049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:29.192 [2024-10-17 16:46:05.212121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.380 ms 00:31:29.192 [2024-10-17 16:46:05.212156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.192 [2024-10-17 16:46:05.249084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.192 [2024-10-17 16:46:05.249238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:29.192 [2024-10-17 16:46:05.249314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.904 ms 00:31:29.192 [2024-10-17 16:46:05.249351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.192 [2024-10-17 16:46:05.286297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.192 [2024-10-17 16:46:05.286450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:29.192 [2024-10-17 16:46:05.286523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.922 ms 00:31:29.192 [2024-10-17 16:46:05.286559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.192 [2024-10-17 16:46:05.324048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.192 [2024-10-17 16:46:05.324212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:29.192 [2024-10-17 16:46:05.324327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.394 ms 00:31:29.192 [2024-10-17 16:46:05.324366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:29.192 [2024-10-17 16:46:05.324474] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:29.192 [2024-10-17 16:46:05.324532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:29.192 [2024-10-17 16:46:05.324583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:29.192 [2024-10-17 16:46:05.324692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:29.192 [2024-10-17 16:46:05.324760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:29.192 [2024-10-17 16:46:05.324810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:29.192 [2024-10-17 16:46:05.324859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:29.192 [2024-10-17 16:46:05.324951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:29.192 [2024-10-17 16:46:05.325004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:31:29.193 [2024-10-17 16:46:05.325798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.325991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326612] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:29.193 [2024-10-17 16:46:05.326641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:29.193 [2024-10-17 16:46:05.326652] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:31:29.193 [2024-10-17 16:46:05.326663] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:29.193 [2024-10-17 16:46:05.326673] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:29.194 [2024-10-17 16:46:05.326683] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:29.194 [2024-10-17 16:46:05.326694] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:29.194 [2024-10-17 16:46:05.326714] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:29.194 [2024-10-17 16:46:05.326724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:29.194 [2024-10-17 16:46:05.326736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:29.194 [2024-10-17 16:46:05.326746] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:29.194 [2024-10-17 16:46:05.326755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:29.194 [2024-10-17 16:46:05.326766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.194 [2024-10-17 16:46:05.326776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:29.194 [2024-10-17 16:46:05.326795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.297 ms 00:31:29.194 [2024-10-17 16:46:05.326805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.194 [2024-10-17 16:46:05.347261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.194 [2024-10-17 16:46:05.347304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:29.194 [2024-10-17 16:46:05.347317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.459 ms 00:31:29.194 [2024-10-17 16:46:05.347328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.194 [2024-10-17 16:46:05.347924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:29.194 [2024-10-17 16:46:05.347953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:29.194 [2024-10-17 16:46:05.347965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:31:29.194 [2024-10-17 16:46:05.347975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.194 [2024-10-17 16:46:05.405609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.194 [2024-10-17 16:46:05.405664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:29.194 [2024-10-17 16:46:05.405679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.194 [2024-10-17 16:46:05.405690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.194 [2024-10-17 16:46:05.405819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.194 [2024-10-17 16:46:05.405835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:29.194 
[2024-10-17 16:46:05.405847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.194 [2024-10-17 16:46:05.405857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.194 [2024-10-17 16:46:05.405911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.194 [2024-10-17 16:46:05.405924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:29.194 [2024-10-17 16:46:05.405935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.194 [2024-10-17 16:46:05.405946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.194 [2024-10-17 16:46:05.405966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.194 [2024-10-17 16:46:05.405977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:29.194 [2024-10-17 16:46:05.405991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.194 [2024-10-17 16:46:05.406001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.452 [2024-10-17 16:46:05.532456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.452 [2024-10-17 16:46:05.532528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:29.452 [2024-10-17 16:46:05.532544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.452 [2024-10-17 16:46:05.532556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.452 [2024-10-17 16:46:05.635054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.452 [2024-10-17 16:46:05.635129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:29.452 [2024-10-17 16:46:05.635145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.452 [2024-10-17 16:46:05.635156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.452 [2024-10-17 16:46:05.635253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.452 [2024-10-17 16:46:05.635265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:29.452 [2024-10-17 16:46:05.635276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.452 [2024-10-17 16:46:05.635287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.452 [2024-10-17 16:46:05.635316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.452 [2024-10-17 16:46:05.635327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:29.452 [2024-10-17 16:46:05.635337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.452 [2024-10-17 16:46:05.635352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.452 [2024-10-17 16:46:05.635467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.452 [2024-10-17 16:46:05.635480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:29.452 [2024-10-17 16:46:05.635491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:29.452 [2024-10-17 16:46:05.635501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:29.452 [2024-10-17 16:46:05.635538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:29.452 [2024-10-17 16:46:05.635550] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:29.452 [2024-10-17 16:46:05.635561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:29.452 [2024-10-17 16:46:05.635571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:29.452 [2024-10-17 16:46:05.635615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:29.452 [2024-10-17 16:46:05.635626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:29.452 [2024-10-17 16:46:05.635637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:29.452 [2024-10-17 16:46:05.635646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:29.452 [2024-10-17 16:46:05.635689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:29.452 [2024-10-17 16:46:05.635723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:29.452 [2024-10-17 16:46:05.635735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:29.452 [2024-10-17 16:46:05.635749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:29.452 [2024-10-17 16:46:05.635889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.131 ms, result 0
00:31:30.388
00:31:30.388
00:31:30.646 16:46:06 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:31:30.904 16:46:06 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:31:31.162 16:46:07 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-10-17 16:46:07.244273] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization...
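The three xtrace lines above appear to be the verification step of the trim test: after the 'FTL shutdown' management process finishes, ftl/trim.sh compares the first 4 MiB of the dumped file test/ftl/data against /dev/zero, records its md5sum, and only then rewrites the random pattern through spdk_dd. A minimal self-contained C sketch of the same zero-check that `cmp --bytes=4194304 <file> /dev/zero` performs; zero_check.c is a hypothetical helper for illustration, not part of the SPDK tree:

```c
/* zero_check.c - verify the first 4 MiB of a file are all zero bytes,
 * mirroring what `cmp --bytes=4194304 <file> /dev/zero` asserts.
 * Hypothetical helper for illustration; not part of SPDK. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHECK_BYTES (4u * 1024 * 1024) /* --bytes=4194304 */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 2;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 2; }

    static unsigned char buf[64 * 1024];
    static const unsigned char zero[64 * 1024]; /* static => zero-initialized */
    size_t remaining = CHECK_BYTES;

    while (remaining > 0) {
        size_t want = remaining < sizeof(buf) ? remaining : sizeof(buf);
        size_t got = fread(buf, 1, want, f);
        if (got == 0) { fprintf(stderr, "short read\n"); fclose(f); return 2; }
        if (memcmp(buf, zero, got) != 0) {
            fprintf(stderr, "non-zero byte within first %u bytes\n", CHECK_BYTES);
            fclose(f);
            return 1; /* cmp likewise exits 1 on a difference */
        }
        remaining -= got;
    }
    fclose(f);
    return 0; /* trimmed range reads back as zeroes */
}
```

Built with `gcc -O2 zero_check.c -o zero_check` and run against test/ftl/data, it should mirror cmp's exit status: 0 when the 4 MiB prefix is all zeroes, 1 otherwise.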
00:31:31.162 [2024-10-17 16:46:07.244411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75913 ] 00:31:31.162 [2024-10-17 16:46:07.415917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.420 [2024-10-17 16:46:07.535884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.678 [2024-10-17 16:46:07.901199] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:31.678 [2024-10-17 16:46:07.901267] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:31.939 [2024-10-17 16:46:08.064211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.939 [2024-10-17 16:46:08.064280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:31.939 [2024-10-17 16:46:08.064297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:31.939 [2024-10-17 16:46:08.064308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.939 [2024-10-17 16:46:08.067580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.939 [2024-10-17 16:46:08.067752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:31.939 [2024-10-17 16:46:08.067776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:31:31.939 [2024-10-17 16:46:08.067786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.939 [2024-10-17 16:46:08.067973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:31.939 [2024-10-17 16:46:08.068965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:31.939 [2024-10-17 16:46:08.069000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.069011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:31.940 [2024-10-17 16:46:08.069023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:31:31.940 [2024-10-17 16:46:08.069033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.070514] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:31.940 [2024-10-17 16:46:08.090391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.090547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:31.940 [2024-10-17 16:46:08.090577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.910 ms 00:31:31.940 [2024-10-17 16:46:08.090588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.090690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.090726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:31.940 [2024-10-17 16:46:08.090738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:31.940 [2024-10-17 16:46:08.090749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.097442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
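Every management step in the startup and shutdown sequences above is reported as the same four-entry group from mngt/ftl_mngt.c (427 "Action", 428 "name: ...", 430 "duration: ...", 431 "status: 0"). Below is a rough, self-contained sketch of how a step runner can produce output of that shape; run_step and ftl_step_fn are made-up names, and this is not SPDK's actual ftl_mngt implementation:

```c
/* step_trace.c - emit Action/name/duration/status notices around a
 * management step, in the spirit of the trace_step lines above.
 * Simplified illustration; not the SPDK implementation. */
#include <stdio.h>
#include <time.h>

typedef int (*ftl_step_fn)(void); /* hypothetical step callback */

static double elapsed_ms(const struct timespec *a, const struct timespec *b)
{
    return (b->tv_sec - a->tv_sec) * 1e3 + (b->tv_nsec - a->tv_nsec) / 1e6;
}

static int run_step(const char *dev, const char *name, ftl_step_fn fn)
{
    struct timespec t0, t1;
    int status;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    status = fn();                      /* execute the step body */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Same four-line shape as the trace_step notices in the log. */
    printf("[FTL][%s] Action\n", dev);
    printf("[FTL][%s] name: %s\n", dev, name);
    printf("[FTL][%s] duration: %.3f ms\n", dev, elapsed_ms(&t0, &t1));
    printf("[FTL][%s] status: %d\n", dev, status);
    return status;
}

static int init_memory_pools(void) { return 0; } /* dummy step body */

int main(void)
{
    return run_step("ftl0", "Initialize memory pools", init_memory_pools);
}
```

In this log the same quadruple shape also appears for the "Rollback" entries emitted while the device is torn down, with the step name echoing the initialization step being unwound.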
00:31:31.940 [2024-10-17 16:46:08.097596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:31.940 [2024-10-17 16:46:08.097617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.659 ms 00:31:31.940 [2024-10-17 16:46:08.097628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.097752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.097767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:31.940 [2024-10-17 16:46:08.097778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:31.940 [2024-10-17 16:46:08.097789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.097822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.097834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:31.940 [2024-10-17 16:46:08.097845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:31.940 [2024-10-17 16:46:08.097859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.097885] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:31.940 [2024-10-17 16:46:08.102819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.102852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:31.940 [2024-10-17 16:46:08.102865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.950 ms 00:31:31.940 [2024-10-17 16:46:08.102875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.102947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.102960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:31.940 [2024-10-17 16:46:08.102971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:31.940 [2024-10-17 16:46:08.102982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.103005] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:31.940 [2024-10-17 16:46:08.103029] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:31.940 [2024-10-17 16:46:08.103068] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:31.940 [2024-10-17 16:46:08.103086] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:31.940 [2024-10-17 16:46:08.103176] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:31.940 [2024-10-17 16:46:08.103189] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:31.940 [2024-10-17 16:46:08.103204] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:31.940 [2024-10-17 16:46:08.103216] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:31.940 [2024-10-17 16:46:08.103228] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:31.940 [2024-10-17 16:46:08.103240] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:31.940 [2024-10-17 16:46:08.103254] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:31.940 [2024-10-17 16:46:08.103264] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:31.940 [2024-10-17 16:46:08.103274] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:31.940 [2024-10-17 16:46:08.103285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.103295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:31.940 [2024-10-17 16:46:08.103306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:31:31.940 [2024-10-17 16:46:08.103316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.103393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.940 [2024-10-17 16:46:08.103404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:31.940 [2024-10-17 16:46:08.103414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:31.940 [2024-10-17 16:46:08.103428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.940 [2024-10-17 16:46:08.103518] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:31.940 [2024-10-17 16:46:08.103530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:31.940 [2024-10-17 16:46:08.103541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:31.940 [2024-10-17 16:46:08.103551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:31.940 [2024-10-17 16:46:08.103571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:31.940 [2024-10-17 16:46:08.103591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:31.940 [2024-10-17 16:46:08.103600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:31.940 [2024-10-17 16:46:08.103620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:31.940 [2024-10-17 16:46:08.103631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:31.940 [2024-10-17 16:46:08.103640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:31.940 [2024-10-17 16:46:08.103661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:31.940 [2024-10-17 16:46:08.103671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:31.940 [2024-10-17 16:46:08.103680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:31.940 [2024-10-17 16:46:08.103717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:31.940 [2024-10-17 16:46:08.103727] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:31.940 [2024-10-17 16:46:08.103761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:31.940 [2024-10-17 16:46:08.103781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:31.940 [2024-10-17 16:46:08.103790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:31.940 [2024-10-17 16:46:08.103809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:31.940 [2024-10-17 16:46:08.103819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:31.940 [2024-10-17 16:46:08.103828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:31.941 [2024-10-17 16:46:08.103837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:31.941 [2024-10-17 16:46:08.103847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:31.941 [2024-10-17 16:46:08.103856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:31.941 [2024-10-17 16:46:08.103865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:31.941 [2024-10-17 16:46:08.103875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:31.941 [2024-10-17 16:46:08.103884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:31.941 [2024-10-17 16:46:08.103893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:31.941 [2024-10-17 16:46:08.103902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:31.941 [2024-10-17 16:46:08.103911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:31.941 [2024-10-17 16:46:08.103921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:31.941 [2024-10-17 16:46:08.103930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:31.941 [2024-10-17 16:46:08.103939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:31.941 [2024-10-17 16:46:08.103948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:31.941 [2024-10-17 16:46:08.103957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:31.941 [2024-10-17 16:46:08.103968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:31.941 [2024-10-17 16:46:08.103977] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:31.941 [2024-10-17 16:46:08.103988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:31.941 [2024-10-17 16:46:08.103998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:31.941 [2024-10-17 16:46:08.104008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:31.941 [2024-10-17 16:46:08.104018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:31.941 [2024-10-17 16:46:08.104027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:31.941 [2024-10-17 16:46:08.104037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:31.941 
[2024-10-17 16:46:08.104047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:31.941 [2024-10-17 16:46:08.104056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:31.941 [2024-10-17 16:46:08.104065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:31.941 [2024-10-17 16:46:08.104077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:31.941 [2024-10-17 16:46:08.104093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:31.941 [2024-10-17 16:46:08.104106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:31.941 [2024-10-17 16:46:08.104116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:31.941 [2024-10-17 16:46:08.104127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:31.941 [2024-10-17 16:46:08.104138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:31.941 [2024-10-17 16:46:08.104148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:31.941 [2024-10-17 16:46:08.104159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:31.941 [2024-10-17 16:46:08.104169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:31.941 [2024-10-17 16:46:08.104180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:31.941 [2024-10-17 16:46:08.104190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:31.941 [2024-10-17 16:46:08.104200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:31.941 [2024-10-17 16:46:08.104211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:31.941 [2024-10-17 16:46:08.104221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:31.941 [2024-10-17 16:46:08.104231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:31.941 [2024-10-17 16:46:08.104242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:31.941 [2024-10-17 16:46:08.104253] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:31.941 [2024-10-17 16:46:08.104264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:31.941 [2024-10-17 16:46:08.104276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:31.941 [2024-10-17 16:46:08.104287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:31.941 [2024-10-17 16:46:08.104297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:31.941 [2024-10-17 16:46:08.104308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:31.941 [2024-10-17 16:46:08.104319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.104329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:31.941 [2024-10-17 16:46:08.104340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:31:31.941 [2024-10-17 16:46:08.104354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.941 [2024-10-17 16:46:08.145554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.145606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:31.941 [2024-10-17 16:46:08.145622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.201 ms 00:31:31.941 [2024-10-17 16:46:08.145634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.941 [2024-10-17 16:46:08.145807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.145822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:31.941 [2024-10-17 16:46:08.145834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:31.941 [2024-10-17 16:46:08.145850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.941 [2024-10-17 16:46:08.207452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.207503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:31.941 [2024-10-17 16:46:08.207519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.673 ms 00:31:31.941 [2024-10-17 16:46:08.207531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.941 [2024-10-17 16:46:08.207677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.207691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:31.941 [2024-10-17 16:46:08.207721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:31.941 [2024-10-17 16:46:08.207732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.941 [2024-10-17 16:46:08.208171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.208189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:31.941 [2024-10-17 16:46:08.208201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:31:31.941 [2024-10-17 16:46:08.208212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.941 [2024-10-17 16:46:08.208338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.208356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:31.941 [2024-10-17 16:46:08.208367] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:31:31.941 [2024-10-17 16:46:08.208386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:31.941 [2024-10-17 16:46:08.228665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:31.941 [2024-10-17 16:46:08.228992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:31.942 [2024-10-17 16:46:08.229023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.285 ms 00:31:31.942 [2024-10-17 16:46:08.229036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.223 [2024-10-17 16:46:08.249146] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:31:32.223 [2024-10-17 16:46:08.249201] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:32.223 [2024-10-17 16:46:08.249220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.223 [2024-10-17 16:46:08.249232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:32.223 [2024-10-17 16:46:08.249246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.040 ms 00:31:32.223 [2024-10-17 16:46:08.249256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.223 [2024-10-17 16:46:08.279161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.223 [2024-10-17 16:46:08.279384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:32.223 [2024-10-17 16:46:08.279410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.834 ms 00:31:32.223 [2024-10-17 16:46:08.279421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.223 [2024-10-17 16:46:08.298236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.298403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:32.224 [2024-10-17 16:46:08.298426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:31:32.224 [2024-10-17 16:46:08.298438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.317219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.317389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:32.224 [2024-10-17 16:46:08.317412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.660 ms 00:31:32.224 [2024-10-17 16:46:08.317423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.318247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.318284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:32.224 [2024-10-17 16:46:08.318298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:31:32.224 [2024-10-17 16:46:08.318308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.406035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.406108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:32.224 [2024-10-17 16:46:08.406126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.835 ms 00:31:32.224 [2024-10-17 16:46:08.406138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.418209] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:32.224 [2024-10-17 16:46:08.434667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.434738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:32.224 [2024-10-17 16:46:08.434756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.423 ms 00:31:32.224 [2024-10-17 16:46:08.434768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.434917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.434934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:32.224 [2024-10-17 16:46:08.434945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:32.224 [2024-10-17 16:46:08.434956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.435015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.435027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:32.224 [2024-10-17 16:46:08.435038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:32.224 [2024-10-17 16:46:08.435049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.435072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.435088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:32.224 [2024-10-17 16:46:08.435101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:32.224 [2024-10-17 16:46:08.435111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.435150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:32.224 [2024-10-17 16:46:08.435167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.435177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:32.224 [2024-10-17 16:46:08.435188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:32.224 [2024-10-17 16:46:08.435198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.472194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.472256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:32.224 [2024-10-17 16:46:08.472272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.032 ms 00:31:32.224 [2024-10-17 16:46:08.472284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.224 [2024-10-17 16:46:08.472430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.224 [2024-10-17 16:46:08.472443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:32.224 [2024-10-17 16:46:08.472455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:32.224 [2024-10-17 16:46:08.472465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
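Each step of the FTL management pipeline is traced by mngt/ftl_mngt.c as the same quadruplet of records (Action, name, duration, status), so a per-step timing table can be scraped straight out of this console text. A rough sketch, assuming the output has been saved to a file (the filename is invented); summing the steps of one management process should land close to the total its finish_msg reports, e.g. the 'FTL startup' summary just below:

    import re

    text = open("ftl_startup.log").read()  # hypothetical capture of this console

    # Pair each "name: <step>" record with the "duration: <ms>" record after it;
    # the non-greedy name capture stops at the runtime stamp following the name.
    pat = re.compile(
        r"name: (.*?) \d{2}:\d{2}:\d{2}\.\d{3}.*?duration: ([0-9.]+) ms",
        re.DOTALL)

    steps = [(name, float(ms)) for name, ms in pat.findall(text)]
    for name, ms in steps:
        print(f"{ms:9.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  total")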
00:31:32.224 [2024-10-17 16:46:08.473460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:32.224 [2024-10-17 16:46:08.478111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 409.573 ms, result 0 00:31:32.224 [2024-10-17 16:46:08.479054] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:32.224 [2024-10-17 16:46:08.498050] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:32.483  [2024-10-17T16:46:08.782Z] Copying: 4096/4096 [kB] (average 26 MBps)
[2024-10-17 16:46:08.653669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:32.483 [2024-10-17 16:46:08.668595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.668775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:32.483 [2024-10-17 16:46:08.668803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:32.483 [2024-10-17 16:46:08.668814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-10-17 16:46:08.668854] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:32.483 [2024-10-17 16:46:08.672991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.673031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:32.483 [2024-10-17 16:46:08.673044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.126 ms 00:31:32.483 [2024-10-17 16:46:08.673054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-10-17 16:46:08.675274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.675314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:32.483 [2024-10-17 16:46:08.675328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.193 ms 00:31:32.483 [2024-10-17 16:46:08.675339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-10-17 16:46:08.678834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.678867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:32.483 [2024-10-17 16:46:08.678880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.480 ms 00:31:32.483 [2024-10-17 16:46:08.678897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-10-17 16:46:08.684585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.684734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:32.483 [2024-10-17 16:46:08.684756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.663 ms 00:31:32.483 [2024-10-17 16:46:08.684767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-10-17 16:46:08.722805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.722868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:32.483 [2024-10-17 16:46:08.722886] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 38.035 ms 00:31:32.483 [2024-10-17 16:46:08.722897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-10-17 16:46:08.746009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.746078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:32.483 [2024-10-17 16:46:08.746096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.057 ms 00:31:32.483 [2024-10-17 16:46:08.746114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.483 [2024-10-17 16:46:08.746320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.483 [2024-10-17 16:46:08.746335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:32.483 [2024-10-17 16:46:08.746347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:31:32.483 [2024-10-17 16:46:08.746358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.743 [2024-10-17 16:46:08.784972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.743 [2024-10-17 16:46:08.785035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:32.743 [2024-10-17 16:46:08.785052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.640 ms 00:31:32.743 [2024-10-17 16:46:08.785075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.743 [2024-10-17 16:46:08.823437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.743 [2024-10-17 16:46:08.823494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:32.743 [2024-10-17 16:46:08.823511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.331 ms 00:31:32.743 [2024-10-17 16:46:08.823523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.743 [2024-10-17 16:46:08.860724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.743 [2024-10-17 16:46:08.860782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:32.743 [2024-10-17 16:46:08.860798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.174 ms 00:31:32.743 [2024-10-17 16:46:08.860810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.743 [2024-10-17 16:46:08.897476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.743 [2024-10-17 16:46:08.897531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:32.743 [2024-10-17 16:46:08.897548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.615 ms 00:31:32.743 [2024-10-17 16:46:08.897559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.743 [2024-10-17 16:46:08.897630] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:32.743 [2024-10-17 16:46:08.897648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:31:32.743 [2024-10-17 16:46:08.897715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.897999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:32.743 [2024-10-17 16:46:08.898276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898516] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:32.744 [2024-10-17 16:46:08.898781] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:32.744 [2024-10-17 16:46:08.898790] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:31:32.744 [2024-10-17 16:46:08.898801] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:32.744 [2024-10-17 16:46:08.898811] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:31:32.744 [2024-10-17 16:46:08.898821] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:32.744 [2024-10-17 16:46:08.898832] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:32.744 [2024-10-17 16:46:08.898842] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:32.744 [2024-10-17 16:46:08.898852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:32.744 [2024-10-17 16:46:08.898862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:32.744 [2024-10-17 16:46:08.898871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:32.744 [2024-10-17 16:46:08.898880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:32.744 [2024-10-17 16:46:08.898891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.744 [2024-10-17 16:46:08.898901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:32.744 [2024-10-17 16:46:08.898912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.264 ms 00:31:32.744 [2024-10-17 16:46:08.898926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.744 [2024-10-17 16:46:08.919507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.744 [2024-10-17 16:46:08.919732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:32.744 [2024-10-17 16:46:08.919758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.589 ms 00:31:32.744 [2024-10-17 16:46:08.919768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.744 [2024-10-17 16:46:08.920391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.744 [2024-10-17 16:46:08.920414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:32.744 [2024-10-17 16:46:08.920426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:31:32.744 [2024-10-17 16:46:08.920436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.744 [2024-10-17 16:46:08.976634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.744 [2024-10-17 16:46:08.976695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:32.744 [2024-10-17 16:46:08.976724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.744 [2024-10-17 16:46:08.976735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.744 [2024-10-17 16:46:08.976872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.744 [2024-10-17 16:46:08.976890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:32.744 [2024-10-17 16:46:08.976901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.744 [2024-10-17 16:46:08.976912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.744 [2024-10-17 16:46:08.976973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.744 [2024-10-17 16:46:08.976987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:32.744 [2024-10-17 16:46:08.976998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.744 [2024-10-17 16:46:08.977009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.744 [2024-10-17 16:46:08.977029] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:32.744 [2024-10-17 16:46:08.977040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:32.744 [2024-10-17 16:46:08.977051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:32.744 [2024-10-17 16:46:08.977065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.103131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.103201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:33.003 [2024-10-17 16:46:09.103218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.103229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.206596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.206881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:33.003 [2024-10-17 16:46:09.206918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.206929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.207036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.207049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:33.003 [2024-10-17 16:46:09.207060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.207070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.207101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.207113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:33.003 [2024-10-17 16:46:09.207123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.207134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.207251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.207264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:33.003 [2024-10-17 16:46:09.207275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.207285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.207329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.207343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:33.003 [2024-10-17 16:46:09.207353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.207363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.207408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.207419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:33.003 [2024-10-17 16:46:09.207429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.207440] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.207486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:33.003 [2024-10-17 16:46:09.207498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:33.003 [2024-10-17 16:46:09.207509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:33.003 [2024-10-17 16:46:09.207519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:33.003 [2024-10-17 16:46:09.207664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.947 ms, result 0 00:31:34.378 00:31:34.378 00:31:34.378 16:46:10 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=75949 00:31:34.378 16:46:10 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:31:34.378 16:46:10 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 75949 00:31:34.378 16:46:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 75949 ']' 00:31:34.378 16:46:10 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.378 16:46:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:34.378 16:46:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:34.378 16:46:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:34.378 16:46:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:31:34.378 [2024-10-17 16:46:10.389314] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
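In the xtrace above, ftl/trim.sh@92 launches a second spdk_tgt and waitforlisten then blocks until the new process is serving its RPC socket at /var/tmp/spdk.sock. In spirit the helper amounts to a polling loop like this sketch (the timeout and interval values are invented, and the real bash helper also keeps checking that the pid is still alive):

    import socket, time

    def waitforlisten(path="/var/tmp/spdk.sock", timeout=30.0, interval=0.2):
        """Poll until a UNIX-domain socket at `path` accepts connections."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)       # succeeds once spdk_tgt is listening
                return True
            except OSError:
                time.sleep(interval)  # not up yet; retry
            finally:
                s.close()
        return False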
00:31:34.378 [2024-10-17 16:46:10.390513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75949 ] 00:31:34.378 [2024-10-17 16:46:10.564581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.637 [2024-10-17 16:46:10.690109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:35.573 16:46:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:35.573 16:46:11 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:31:35.573 16:46:11 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:31:35.573 [2024-10-17 16:46:11.824208] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:35.573 [2024-10-17 16:46:11.824289] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:35.842 [2024-10-17 16:46:11.995688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:11.995766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:35.842 [2024-10-17 16:46:11.995786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:35.842 [2024-10-17 16:46:11.995798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:11.999278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:11.999326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:35.842 [2024-10-17 16:46:11.999342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.460 ms 00:31:35.842 [2024-10-17 16:46:11.999353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:11.999480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:35.842 [2024-10-17 16:46:12.000641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:35.842 [2024-10-17 16:46:12.000688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.000724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:35.842 [2024-10-17 16:46:12.000747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.223 ms 00:31:35.842 [2024-10-17 16:46:12.000776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.002519] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:35.842 [2024-10-17 16:46:12.022432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.022496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:35.842 [2024-10-17 16:46:12.022512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.953 ms 00:31:35.842 [2024-10-17 16:46:12.022529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.022648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.022669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:35.842 [2024-10-17 16:46:12.022681] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:35.842 [2024-10-17 16:46:12.022695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.030041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.030083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:35.842 [2024-10-17 16:46:12.030096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.287 ms 00:31:35.842 [2024-10-17 16:46:12.030108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.030231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.030248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:35.842 [2024-10-17 16:46:12.030260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:31:35.842 [2024-10-17 16:46:12.030273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.030305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.030323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:35.842 [2024-10-17 16:46:12.030334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:35.842 [2024-10-17 16:46:12.030354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.030384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:35.842 [2024-10-17 16:46:12.035559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.035710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:35.842 [2024-10-17 16:46:12.035826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:31:35.842 [2024-10-17 16:46:12.035877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.036038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.036083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:35.842 [2024-10-17 16:46:12.036118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:35.842 [2024-10-17 16:46:12.036207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.036268] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:35.842 [2024-10-17 16:46:12.036315] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:35.842 [2024-10-17 16:46:12.036510] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:35.842 [2024-10-17 16:46:12.036538] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:35.842 [2024-10-17 16:46:12.036635] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:35.842 [2024-10-17 16:46:12.036649] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:35.842 [2024-10-17 16:46:12.036666] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:35.842 [2024-10-17 16:46:12.036680] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:35.842 [2024-10-17 16:46:12.036714] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:35.842 [2024-10-17 16:46:12.036727] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:35.842 [2024-10-17 16:46:12.036740] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:35.842 [2024-10-17 16:46:12.036751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:35.842 [2024-10-17 16:46:12.036767] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:35.842 [2024-10-17 16:46:12.036779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.036792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:35.842 [2024-10-17 16:46:12.036803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:31:35.842 [2024-10-17 16:46:12.036816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.036898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.842 [2024-10-17 16:46:12.036913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:35.842 [2024-10-17 16:46:12.036926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:35.842 [2024-10-17 16:46:12.036939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.842 [2024-10-17 16:46:12.037029] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:35.842 [2024-10-17 16:46:12.037044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:35.842 [2024-10-17 16:46:12.037055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:35.842 [2024-10-17 16:46:12.037068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.842 [2024-10-17 16:46:12.037079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:35.842 [2024-10-17 16:46:12.037090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:35.842 [2024-10-17 16:46:12.037100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:35.842 [2024-10-17 16:46:12.037126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:35.842 [2024-10-17 16:46:12.037136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:35.842 [2024-10-17 16:46:12.037151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:35.842 [2024-10-17 16:46:12.037160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:35.842 [2024-10-17 16:46:12.037174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:35.842 [2024-10-17 16:46:12.037184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:35.842 [2024-10-17 16:46:12.037199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:35.842 [2024-10-17 16:46:12.037209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:35.842 [2024-10-17 16:46:12.037223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.842 
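Because the layout is persisted in the superblock, this second startup prints the same region table as the first pass further up (same offsets and sizes throughout). One way to verify that mechanically is to scrape every dump_region triple out of the saved console text and diff the two passes; a quick sketch (the filename is invented):

    import re

    text = open("ftl_startup.log").read()  # hypothetical capture of this console

    # Each region is dumped as three records: "Region <name>", "offset: <MiB>",
    # "blocks: <MiB>". Anchoring on dump_region keeps the superblock metadata
    # records ("Region type:0x..", which have no MiB offset/blocks) from matching.
    pat = re.compile(
        r"dump_region: \*NOTICE\*: \[FTL\]\[ftl0\] Region (\S+)"
        r".*?offset: ([0-9.]+) MiB.*?blocks: ([0-9.]+) MiB",
        re.DOTALL)

    for name, off, sz in pat.findall(text):
        print(f"{name:16s} offset {float(off):10.2f} MiB  size {float(sz):10.2f} MiB")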
[2024-10-17 16:46:12.037233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:35.842 [2024-10-17 16:46:12.037248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:35.842 [2024-10-17 16:46:12.037257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.842 [2024-10-17 16:46:12.037272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:35.842 [2024-10-17 16:46:12.037293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.843 [2024-10-17 16:46:12.037321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:35.843 [2024-10-17 16:46:12.037340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.843 [2024-10-17 16:46:12.037380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:35.843 [2024-10-17 16:46:12.037391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.843 [2024-10-17 16:46:12.037416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:35.843 [2024-10-17 16:46:12.037431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:35.843 [2024-10-17 16:46:12.037458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:35.843 [2024-10-17 16:46:12.037468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:35.843 [2024-10-17 16:46:12.037492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:35.843 [2024-10-17 16:46:12.037519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:35.843 [2024-10-17 16:46:12.037528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:35.843 [2024-10-17 16:46:12.037542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:35.843 [2024-10-17 16:46:12.037552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:35.843 [2024-10-17 16:46:12.037570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:35.843 [2024-10-17 16:46:12.037594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:35.843 [2024-10-17 16:46:12.037604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037617] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:35.843 [2024-10-17 16:46:12.037628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:35.843 [2024-10-17 16:46:12.037643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:35.843 [2024-10-17 16:46:12.037658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:35.843 [2024-10-17 16:46:12.037673] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:31:35.843 [2024-10-17 16:46:12.037683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:35.843 [2024-10-17 16:46:12.037708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:35.843 [2024-10-17 16:46:12.037719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:35.843 [2024-10-17 16:46:12.037732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:35.843 [2024-10-17 16:46:12.037742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:35.843 [2024-10-17 16:46:12.037757] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:35.843 [2024-10-17 16:46:12.037771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:35.843 [2024-10-17 16:46:12.037794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:35.843 [2024-10-17 16:46:12.037805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:35.843 [2024-10-17 16:46:12.037821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:35.843 [2024-10-17 16:46:12.037831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:35.843 [2024-10-17 16:46:12.037848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:35.843 [2024-10-17 16:46:12.037858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:35.843 [2024-10-17 16:46:12.037873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:35.843 [2024-10-17 16:46:12.037884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:35.843 [2024-10-17 16:46:12.037899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:35.843 [2024-10-17 16:46:12.037910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:35.843 [2024-10-17 16:46:12.037925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:35.843 [2024-10-17 16:46:12.037936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:35.843 [2024-10-17 16:46:12.037951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:35.843 [2024-10-17 16:46:12.037977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:35.843 [2024-10-17 16:46:12.037994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:35.843 [2024-10-17 
16:46:12.038007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:35.843 [2024-10-17 16:46:12.038028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:35.843 [2024-10-17 16:46:12.038039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:35.843 [2024-10-17 16:46:12.038053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:35.843 [2024-10-17 16:46:12.038064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:35.843 [2024-10-17 16:46:12.038078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.843 [2024-10-17 16:46:12.038089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:35.843 [2024-10-17 16:46:12.038103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:31:35.843 [2024-10-17 16:46:12.038114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.843 [2024-10-17 16:46:12.080332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.843 [2024-10-17 16:46:12.080404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:35.843 [2024-10-17 16:46:12.080424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.214 ms 00:31:35.843 [2024-10-17 16:46:12.080451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:35.843 [2024-10-17 16:46:12.080632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:35.843 [2024-10-17 16:46:12.080649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:35.843 [2024-10-17 16:46:12.080663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:35.843 [2024-10-17 16:46:12.080675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.133276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.133348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:36.123 [2024-10-17 16:46:12.133374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.648 ms 00:31:36.123 [2024-10-17 16:46:12.133390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.133543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.133576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:36.123 [2024-10-17 16:46:12.133593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:36.123 [2024-10-17 16:46:12.133604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.134139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.134157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:36.123 [2024-10-17 16:46:12.134173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:31:36.123 [2024-10-17 16:46:12.134184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.134330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.134344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:36.123 [2024-10-17 16:46:12.134360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:36.123 [2024-10-17 16:46:12.134371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.157991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.158065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:36.123 [2024-10-17 16:46:12.158086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.623 ms 00:31:36.123 [2024-10-17 16:46:12.158098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.179195] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:36.123 [2024-10-17 16:46:12.179403] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:36.123 [2024-10-17 16:46:12.179437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.179449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:36.123 [2024-10-17 16:46:12.179468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.204 ms 00:31:36.123 [2024-10-17 16:46:12.179479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.211359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.211427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:36.123 [2024-10-17 16:46:12.211449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.753 ms 00:31:36.123 [2024-10-17 16:46:12.211461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.231164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.231219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:36.123 [2024-10-17 16:46:12.231243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.574 ms 00:31:36.123 [2024-10-17 16:46:12.231253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.250590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.250642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:36.123 [2024-10-17 16:46:12.250660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.243 ms 00:31:36.123 [2024-10-17 16:46:12.250671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.251561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.251594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:36.123 [2024-10-17 16:46:12.251612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:31:36.123 [2024-10-17 16:46:12.251623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 
16:46:12.355995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.356063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:36.123 [2024-10-17 16:46:12.356086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.496 ms 00:31:36.123 [2024-10-17 16:46:12.356098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.368627] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:36.123 [2024-10-17 16:46:12.385932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.386021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:36.123 [2024-10-17 16:46:12.386038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.705 ms 00:31:36.123 [2024-10-17 16:46:12.386055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.123 [2024-10-17 16:46:12.386197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.123 [2024-10-17 16:46:12.386218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:36.124 [2024-10-17 16:46:12.386229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:36.124 [2024-10-17 16:46:12.386245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.124 [2024-10-17 16:46:12.386302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.124 [2024-10-17 16:46:12.386319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:36.124 [2024-10-17 16:46:12.386330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:36.124 [2024-10-17 16:46:12.386345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.124 [2024-10-17 16:46:12.386372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.124 [2024-10-17 16:46:12.386394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:36.124 [2024-10-17 16:46:12.386405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:36.124 [2024-10-17 16:46:12.386421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.124 [2024-10-17 16:46:12.386467] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:36.124 [2024-10-17 16:46:12.386490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.124 [2024-10-17 16:46:12.386500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:36.124 [2024-10-17 16:46:12.386516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:36.124 [2024-10-17 16:46:12.386532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.384 [2024-10-17 16:46:12.425282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.384 [2024-10-17 16:46:12.425591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:36.384 [2024-10-17 16:46:12.425624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.776 ms 00:31:36.384 [2024-10-17 16:46:12.425637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.384 [2024-10-17 16:46:12.425846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.384 [2024-10-17 16:46:12.425878] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:36.384 [2024-10-17 16:46:12.425905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:36.384 [2024-10-17 16:46:12.425924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.384 [2024-10-17 16:46:12.427073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:36.384 [2024-10-17 16:46:12.432314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 431.661 ms, result 0 00:31:36.384 [2024-10-17 16:46:12.434025] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:36.384 Some configs were skipped because the RPC state that can call them passed over. 00:31:36.384 16:46:12 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:31:36.643 [2024-10-17 16:46:12.681267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.643 [2024-10-17 16:46:12.681344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:31:36.643 [2024-10-17 16:46:12.681362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:31:36.643 [2024-10-17 16:46:12.681377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.643 [2024-10-17 16:46:12.681419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.439 ms, result 0 00:31:36.643 true 00:31:36.643 16:46:12 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:31:36.643 [2024-10-17 16:46:12.913125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:36.643 [2024-10-17 16:46:12.913428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:31:36.643 [2024-10-17 16:46:12.913536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:31:36.643 [2024-10-17 16:46:12.913656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:36.643 [2024-10-17 16:46:12.913770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.870 ms, result 0 00:31:36.643 true 00:31:36.643 16:46:12 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 75949 00:31:36.643 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75949 ']' 00:31:36.643 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75949 00:31:36.643 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:31:36.903 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:36.903 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75949 00:31:36.903 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:36.903 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:36.903 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75949' 00:31:36.903 killing process with pid 75949 00:31:36.903 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 75949 00:31:36.903 16:46:12 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 75949 00:31:37.842 [2024-10-17 16:46:14.108251] 
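The trim.sh steps traced above condense to roughly the following sketch (the rpc.py path, the ftl0 bdev name, and pid 75949 are as printed in this run, not general values; the real test script carries more error handling than shown):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Unmap the first and the last 1024 blocks of the device.
  $rpc bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  $rpc bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # 23591936 + 1024 = 23592960, the full L2P entry count

  # killprocess: confirm the pid is alive, kill it, wait for it.
  # Stopping the target is what triggers the 'FTL shutdown' sequence logged next.
  kill -0 75949 && kill 75949 && wait 75949
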
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-10-17 16:46:14.108308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:37.842 [2024-10-17 16:46:14.108325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:37.842 [2024-10-17 16:46:14.108338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-10-17 16:46:14.108363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:37.842 [2024-10-17 16:46:14.112580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-10-17 16:46:14.112615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:37.842 [2024-10-17 16:46:14.112636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.191 ms 00:31:37.842 [2024-10-17 16:46:14.112646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-10-17 16:46:14.112909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-10-17 16:46:14.112923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:37.842 [2024-10-17 16:46:14.112936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:31:37.842 [2024-10-17 16:46:14.112947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-10-17 16:46:14.116356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-10-17 16:46:14.116401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:37.842 [2024-10-17 16:46:14.116418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.390 ms 00:31:37.842 [2024-10-17 16:46:14.116429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-10-17 16:46:14.122201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-10-17 16:46:14.122359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:37.842 [2024-10-17 16:46:14.122389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.735 ms 00:31:37.842 [2024-10-17 16:46:14.122400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:37.842 [2024-10-17 16:46:14.137615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:37.842 [2024-10-17 16:46:14.137794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:37.842 [2024-10-17 16:46:14.137892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.171 ms 00:31:37.842 [2024-10-17 16:46:14.137939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-10-17 16:46:14.148202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.102 [2024-10-17 16:46:14.148348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:38.102 [2024-10-17 16:46:14.148499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.179 ms 00:31:38.102 [2024-10-17 16:46:14.148542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-10-17 16:46:14.148736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.102 [2024-10-17 16:46:14.148866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:38.102 [2024-10-17 16:46:14.148965] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:31:38.102 [2024-10-17 16:46:14.148995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-10-17 16:46:14.164679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.102 [2024-10-17 16:46:14.164850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:38.102 [2024-10-17 16:46:14.164945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.659 ms 00:31:38.102 [2024-10-17 16:46:14.164984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-10-17 16:46:14.180943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.102 [2024-10-17 16:46:14.181101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:38.102 [2024-10-17 16:46:14.181241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.902 ms 00:31:38.102 [2024-10-17 16:46:14.181279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-10-17 16:46:14.196391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.102 [2024-10-17 16:46:14.196554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:38.102 [2024-10-17 16:46:14.196811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.040 ms 00:31:38.102 [2024-10-17 16:46:14.196850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-10-17 16:46:14.211868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.102 [2024-10-17 16:46:14.212019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:38.102 [2024-10-17 16:46:14.212143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.916 ms 00:31:38.102 [2024-10-17 16:46:14.212180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.102 [2024-10-17 16:46:14.212264] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:38.102 [2024-10-17 16:46:14.212311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.212971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 
16:46:14.213022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.213969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:31:38.102 [2024-10-17 16:46:14.214400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:38.102 [2024-10-17 16:46:14.214846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.214988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:38.103 [2024-10-17 16:46:14.215245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:38.103 [2024-10-17 16:46:14.215265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:31:38.103 [2024-10-17 16:46:14.215290] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:38.103 [2024-10-17 16:46:14.215312] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:38.103 [2024-10-17 16:46:14.215328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:38.103 [2024-10-17 16:46:14.215344] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:38.103 [2024-10-17 16:46:14.215354] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:38.103 [2024-10-17 16:46:14.215369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:38.103 [2024-10-17 16:46:14.215380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:38.103 [2024-10-17 16:46:14.215393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:38.103 [2024-10-17 16:46:14.215403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:38.103 [2024-10-17 16:46:14.215418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
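Every band in the dump above reads "0 / 261120 wr_cnt: 0 state: free", consistent with "total valid LBAs: 0" after a trim-only workload; with zero user writes the WAF (total writes / user writes) is undefined and prints as "inf", so the 960 total writes must all be internal metadata traffic. When scanning a capture of such a dump, a filter like the following flags any band that is not pristine (ftl_trim.log is a hypothetical capture file name):

  grep 'ftl_dev_dump_bands' ftl_trim.log | grep -v 'wr_cnt: 0 state: free'
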
00:31:38.103 [2024-10-17 16:46:14.215430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:38.103 [2024-10-17 16:46:14.215446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.167 ms 00:31:38.103 [2024-10-17 16:46:14.215457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.103 [2024-10-17 16:46:14.236309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.103 [2024-10-17 16:46:14.236471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:38.103 [2024-10-17 16:46:14.236505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.851 ms 00:31:38.103 [2024-10-17 16:46:14.236516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.103 [2024-10-17 16:46:14.237105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:38.103 [2024-10-17 16:46:14.237131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:38.103 [2024-10-17 16:46:14.237148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:31:38.103 [2024-10-17 16:46:14.237159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.103 [2024-10-17 16:46:14.308625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.103 [2024-10-17 16:46:14.308689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:38.103 [2024-10-17 16:46:14.308725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.103 [2024-10-17 16:46:14.308736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.103 [2024-10-17 16:46:14.308884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.103 [2024-10-17 16:46:14.308898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:38.103 [2024-10-17 16:46:14.308911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.103 [2024-10-17 16:46:14.308921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.103 [2024-10-17 16:46:14.308983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.103 [2024-10-17 16:46:14.308996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:38.103 [2024-10-17 16:46:14.309019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.103 [2024-10-17 16:46:14.309030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.103 [2024-10-17 16:46:14.309056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.103 [2024-10-17 16:46:14.309067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:38.103 [2024-10-17 16:46:14.309082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.103 [2024-10-17 16:46:14.309093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 16:46:14.434986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.362 [2024-10-17 16:46:14.435237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:38.362 [2024-10-17 16:46:14.435269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.362 [2024-10-17 16:46:14.435281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 
16:46:14.538111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.362 [2024-10-17 16:46:14.538347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:38.362 [2024-10-17 16:46:14.538383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.362 [2024-10-17 16:46:14.538395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 16:46:14.538525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.362 [2024-10-17 16:46:14.538543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:38.362 [2024-10-17 16:46:14.538566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.362 [2024-10-17 16:46:14.538576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 16:46:14.538611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.362 [2024-10-17 16:46:14.538622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:38.362 [2024-10-17 16:46:14.538638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.362 [2024-10-17 16:46:14.538648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 16:46:14.538788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.362 [2024-10-17 16:46:14.538803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:38.362 [2024-10-17 16:46:14.538824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.362 [2024-10-17 16:46:14.538835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 16:46:14.538881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.362 [2024-10-17 16:46:14.538896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:38.362 [2024-10-17 16:46:14.538913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.362 [2024-10-17 16:46:14.538923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 16:46:14.538966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.362 [2024-10-17 16:46:14.538979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:38.362 [2024-10-17 16:46:14.539004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.362 [2024-10-17 16:46:14.539014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.362 [2024-10-17 16:46:14.539064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:38.363 [2024-10-17 16:46:14.539077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:38.363 [2024-10-17 16:46:14.539092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:38.363 [2024-10-17 16:46:14.539102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:38.363 [2024-10-17 16:46:14.539251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 431.671 ms, result 0 00:31:39.300 16:46:15 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:39.560 [2024-10-17 16:46:15.640878] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:31:39.560 [2024-10-17 16:46:15.641011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76016 ] 00:31:39.560 [2024-10-17 16:46:15.812759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:39.819 [2024-10-17 16:46:15.926585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.079 [2024-10-17 16:46:16.285648] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:40.079 [2024-10-17 16:46:16.285734] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:40.339 [2024-10-17 16:46:16.448123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.339 [2024-10-17 16:46:16.448190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:40.339 [2024-10-17 16:46:16.448207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:40.339 [2024-10-17 16:46:16.448218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.339 [2024-10-17 16:46:16.451360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.339 [2024-10-17 16:46:16.451402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:40.339 [2024-10-17 16:46:16.451415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:31:40.339 [2024-10-17 16:46:16.451426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.339 [2024-10-17 16:46:16.451527] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:40.339 [2024-10-17 16:46:16.452555] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:40.339 [2024-10-17 16:46:16.452592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.339 [2024-10-17 16:46:16.452603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:40.339 [2024-10-17 16:46:16.452613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.074 ms 00:31:40.339 [2024-10-17 16:46:16.452623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.339 [2024-10-17 16:46:16.454165] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:40.339 [2024-10-17 16:46:16.473354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.473394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:40.340 [2024-10-17 16:46:16.473414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.221 ms 00:31:40.340 [2024-10-17 16:46:16.473425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.473527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.473542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:40.340 [2024-10-17 16:46:16.473554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:40.340 [2024-10-17 
16:46:16.473564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.480337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.480506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:40.340 [2024-10-17 16:46:16.480527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.741 ms 00:31:40.340 [2024-10-17 16:46:16.480538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.480645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.480659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:40.340 [2024-10-17 16:46:16.480670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:40.340 [2024-10-17 16:46:16.480681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.480733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.480746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:40.340 [2024-10-17 16:46:16.480756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:40.340 [2024-10-17 16:46:16.480778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.480802] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:31:40.340 [2024-10-17 16:46:16.485598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.485631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:40.340 [2024-10-17 16:46:16.485644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.810 ms 00:31:40.340 [2024-10-17 16:46:16.485654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.485735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.485749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:40.340 [2024-10-17 16:46:16.485761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:40.340 [2024-10-17 16:46:16.485771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.485796] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:40.340 [2024-10-17 16:46:16.485819] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:40.340 [2024-10-17 16:46:16.485858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:40.340 [2024-10-17 16:46:16.485876] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:40.340 [2024-10-17 16:46:16.485967] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:40.340 [2024-10-17 16:46:16.485980] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:40.340 [2024-10-17 16:46:16.485993] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
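The geometry in the layout dump that follows matches the startup dump earlier in the run, and the numbers are internally consistent: 23592960 L2P entries at 4 bytes each is exactly the 90.00 MiB "Region l2p", and the second unmap above targeted the tail of that entry space. A quick shell check of both (all values taken from the log):

  echo $(( 23592960 * 4 / 1024 / 1024 ))   # -> 90 (MiB), the size of "Region l2p"
  echo $(( 23591936 + 1024 ))              # -> 23592960, i.e. the last 1024 entries were trimmed
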
00:31:40.340 [2024-10-17 16:46:16.486006] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486018] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486030] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:31:40.340 [2024-10-17 16:46:16.486043] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:40.340 [2024-10-17 16:46:16.486053] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:40.340 [2024-10-17 16:46:16.486063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:40.340 [2024-10-17 16:46:16.486074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.486084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:40.340 [2024-10-17 16:46:16.486095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:31:40.340 [2024-10-17 16:46:16.486104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.486180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.340 [2024-10-17 16:46:16.486192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:40.340 [2024-10-17 16:46:16.486202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:40.340 [2024-10-17 16:46:16.486216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.340 [2024-10-17 16:46:16.486302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:40.340 [2024-10-17 16:46:16.486320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:40.340 [2024-10-17 16:46:16.486331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:40.340 [2024-10-17 16:46:16.486363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:40.340 [2024-10-17 16:46:16.486392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:40.340 [2024-10-17 16:46:16.486411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:40.340 [2024-10-17 16:46:16.486420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:31:40.340 [2024-10-17 16:46:16.486430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:40.340 [2024-10-17 16:46:16.486450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:40.340 [2024-10-17 16:46:16.486461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:31:40.340 [2024-10-17 16:46:16.486470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:31:40.340 [2024-10-17 16:46:16.486489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:40.340 [2024-10-17 16:46:16.486517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:40.340 [2024-10-17 16:46:16.486545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:40.340 [2024-10-17 16:46:16.486574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:40.340 [2024-10-17 16:46:16.486602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:40.340 [2024-10-17 16:46:16.486629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:40.340 [2024-10-17 16:46:16.486647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:40.340 [2024-10-17 16:46:16.486656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:31:40.340 [2024-10-17 16:46:16.486665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:40.340 [2024-10-17 16:46:16.486674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:40.340 [2024-10-17 16:46:16.486683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:31:40.340 [2024-10-17 16:46:16.486692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:40.340 [2024-10-17 16:46:16.486723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:31:40.340 [2024-10-17 16:46:16.486734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486744] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:40.340 [2024-10-17 16:46:16.486754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:40.340 [2024-10-17 16:46:16.486764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:40.340 [2024-10-17 16:46:16.486774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:40.340 [2024-10-17 16:46:16.486784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:40.340 [2024-10-17 16:46:16.486794] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:40.340 [2024-10-17 16:46:16.486803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:40.341 [2024-10-17 16:46:16.486812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:40.341 [2024-10-17 16:46:16.486821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:40.341 [2024-10-17 16:46:16.486830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:40.341 [2024-10-17 16:46:16.486841] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:40.341 [2024-10-17 16:46:16.486856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:40.341 [2024-10-17 16:46:16.486867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:31:40.341 [2024-10-17 16:46:16.486878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:31:40.341 [2024-10-17 16:46:16.486889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:31:40.341 [2024-10-17 16:46:16.486899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:31:40.341 [2024-10-17 16:46:16.486909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:31:40.341 [2024-10-17 16:46:16.486920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:31:40.341 [2024-10-17 16:46:16.486930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:31:40.341 [2024-10-17 16:46:16.486940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:31:40.341 [2024-10-17 16:46:16.486950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:31:40.341 [2024-10-17 16:46:16.486960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:31:40.341 [2024-10-17 16:46:16.486970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:31:40.341 [2024-10-17 16:46:16.486981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:31:40.341 [2024-10-17 16:46:16.486991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:31:40.341 [2024-10-17 16:46:16.487001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:31:40.341 [2024-10-17 16:46:16.487011] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:40.341 [2024-10-17 16:46:16.487021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:40.341 [2024-10-17 16:46:16.487032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:40.341 [2024-10-17 16:46:16.487042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:40.341 [2024-10-17 16:46:16.487052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:40.341 [2024-10-17 16:46:16.487064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:40.341 [2024-10-17 16:46:16.487075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.487085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:40.341 [2024-10-17 16:46:16.487095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:31:40.341 [2024-10-17 16:46:16.487109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.526037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.526098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:40.341 [2024-10-17 16:46:16.526115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.931 ms 00:31:40.341 [2024-10-17 16:46:16.526125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.526298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.526312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:40.341 [2024-10-17 16:46:16.526324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:40.341 [2024-10-17 16:46:16.526339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.586601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.586654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:40.341 [2024-10-17 16:46:16.586669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.333 ms 00:31:40.341 [2024-10-17 16:46:16.586680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.586830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.586845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:40.341 [2024-10-17 16:46:16.586856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:40.341 [2024-10-17 16:46:16.586866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.587310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.587328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:40.341 [2024-10-17 16:46:16.587339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:31:40.341 [2024-10-17 16:46:16.587350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.587474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.587491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:40.341 [2024-10-17 16:46:16.587502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:31:40.341 [2024-10-17 16:46:16.587512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.607338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.607379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:40.341 [2024-10-17 16:46:16.607394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.834 ms 00:31:40.341 [2024-10-17 16:46:16.607405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.341 [2024-10-17 16:46:16.626914] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:40.341 [2024-10-17 16:46:16.626967] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:40.341 [2024-10-17 16:46:16.626985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.341 [2024-10-17 16:46:16.626997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:40.341 [2024-10-17 16:46:16.627010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.468 ms 00:31:40.341 [2024-10-17 16:46:16.627020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.657408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.657599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:40.601 [2024-10-17 16:46:16.657623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.327 ms 00:31:40.601 [2024-10-17 16:46:16.657633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.675876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.676019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:40.601 [2024-10-17 16:46:16.676041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.160 ms 00:31:40.601 [2024-10-17 16:46:16.676052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.694422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.694461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:40.601 [2024-10-17 16:46:16.694474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.255 ms 00:31:40.601 [2024-10-17 16:46:16.694485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.695304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.695339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:40.601 [2024-10-17 16:46:16.695352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:31:40.601 [2024-10-17 16:46:16.695362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.782312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 
16:46:16.782552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:40.601 [2024-10-17 16:46:16.782579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.059 ms 00:31:40.601 [2024-10-17 16:46:16.782592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.793757] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:31:40.601 [2024-10-17 16:46:16.810374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.810426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:40.601 [2024-10-17 16:46:16.810442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.607 ms 00:31:40.601 [2024-10-17 16:46:16.810453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.810592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.810610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:40.601 [2024-10-17 16:46:16.810621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:40.601 [2024-10-17 16:46:16.810632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.810689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.810722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:40.601 [2024-10-17 16:46:16.810734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:40.601 [2024-10-17 16:46:16.810745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.810773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.810789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:40.601 [2024-10-17 16:46:16.810802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:40.601 [2024-10-17 16:46:16.810813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.810847] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:40.601 [2024-10-17 16:46:16.810860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.810870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:40.601 [2024-10-17 16:46:16.810880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:40.601 [2024-10-17 16:46:16.810890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.849407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.849459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:40.601 [2024-10-17 16:46:16.849474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.554 ms 00:31:40.601 [2024-10-17 16:46:16.849485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.849616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.601 [2024-10-17 16:46:16.849630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:40.601 [2024-10-17 
16:46:16.849641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:31:40.601 [2024-10-17 16:46:16.849652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.601 [2024-10-17 16:46:16.850606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:40.601 [2024-10-17 16:46:16.855049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 402.805 ms, result 0 00:31:40.601 [2024-10-17 16:46:16.855898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:40.601 [2024-10-17 16:46:16.874915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:41.974  [2024-10-17T16:46:19.210Z] Copying: 31/256 [MB] (31 MBps) [2024-10-17T16:46:20.146Z] Copying: 60/256 [MB] (28 MBps) [2024-10-17T16:46:21.084Z] Copying: 90/256 [MB] (30 MBps) [2024-10-17T16:46:22.020Z] Copying: 118/256 [MB] (28 MBps) [2024-10-17T16:46:23.086Z] Copying: 147/256 [MB] (28 MBps) [2024-10-17T16:46:24.023Z] Copying: 176/256 [MB] (29 MBps) [2024-10-17T16:46:24.960Z] Copying: 207/256 [MB] (30 MBps) [2024-10-17T16:46:25.529Z] Copying: 238/256 [MB] (30 MBps) [2024-10-17T16:46:26.099Z] Copying: 256/256 [MB] (average 29 MBps)[2024-10-17 16:46:25.811724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:49.800 [2024-10-17 16:46:25.827238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.827427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:49.800 [2024-10-17 16:46:25.827551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:49.800 [2024-10-17 16:46:25.827591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.827656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:31:49.800 [2024-10-17 16:46:25.831981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.832133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:49.800 [2024-10-17 16:46:25.832213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.255 ms 00:31:49.800 [2024-10-17 16:46:25.832249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.832533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.832610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:49.800 [2024-10-17 16:46:25.832683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:31:49.800 [2024-10-17 16:46:25.832729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.836260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.836686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:49.800 [2024-10-17 16:46:25.836714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:31:49.800 [2024-10-17 16:46:25.836731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.842691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:31:49.800 [2024-10-17 16:46:25.842728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:49.800 [2024-10-17 16:46:25.842741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.940 ms 00:31:49.800 [2024-10-17 16:46:25.842751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.883089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.883155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:49.800 [2024-10-17 16:46:25.883172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.320 ms 00:31:49.800 [2024-10-17 16:46:25.883184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.904643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.904715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:49.800 [2024-10-17 16:46:25.904734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.401 ms 00:31:49.800 [2024-10-17 16:46:25.904754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.904917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.904932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:49.800 [2024-10-17 16:46:25.904944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:31:49.800 [2024-10-17 16:46:25.904954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.942842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.943084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:49.800 [2024-10-17 16:46:25.943111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.914 ms 00:31:49.800 [2024-10-17 16:46:25.943122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:25.981135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:25.981194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:49.800 [2024-10-17 16:46:25.981210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.985 ms 00:31:49.800 [2024-10-17 16:46:25.981221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:26.017844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:26.017899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:49.800 [2024-10-17 16:46:26.017915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.605 ms 00:31:49.800 [2024-10-17 16:46:26.017925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 16:46:26.054256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.800 [2024-10-17 16:46:26.054315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:49.800 [2024-10-17 16:46:26.054331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.288 ms 00:31:49.800 [2024-10-17 16:46:26.054341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.800 [2024-10-17 
16:46:26.054419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:49.800 [2024-10-17 16:46:26.054440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:49.800 [2024-10-17 16:46:26.054580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 
16:46:26.054728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.054996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:31:49.801 [2024-10-17 16:46:26.055006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:49.801 [2024-10-17 16:46:26.055564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:49.801 [2024-10-17 16:46:26.055574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 86c7f8b6-9b0b-494b-b85d-53cff9f6f843 00:31:49.801 [2024-10-17 16:46:26.055584] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:49.801 [2024-10-17 16:46:26.055594] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:49.801 [2024-10-17 16:46:26.055605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:49.801 [2024-10-17 16:46:26.055616] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:49.801 [2024-10-17 16:46:26.055626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:49.801 [2024-10-17 16:46:26.055636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:49.801 [2024-10-17 16:46:26.055646] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:49.801 [2024-10-17 16:46:26.055654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:49.801 [2024-10-17 16:46:26.055664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:49.802 [2024-10-17 16:46:26.055673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.802 [2024-10-17 16:46:26.055684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:49.802 [2024-10-17 16:46:26.055695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:31:49.802 [2024-10-17 16:46:26.055720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.802 [2024-10-17 16:46:26.076230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.802 [2024-10-17 16:46:26.076283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:49.802 [2024-10-17 16:46:26.076300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.518 ms 00:31:49.802 [2024-10-17 16:46:26.076310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:49.802 [2024-10-17 16:46:26.076914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:49.802 [2024-10-17 16:46:26.076928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:49.802 [2024-10-17 16:46:26.076948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:31:49.802 [2024-10-17 16:46:26.076958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.061 [2024-10-17 16:46:26.131353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.061 [2024-10-17 16:46:26.131416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:50.061 [2024-10-17 16:46:26.131431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.061 [2024-10-17 16:46:26.131442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.061 [2024-10-17 16:46:26.131547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.061 [2024-10-17 16:46:26.131560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:50.061 [2024-10-17 16:46:26.131577] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.061 [2024-10-17 16:46:26.131592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.061 [2024-10-17 16:46:26.131646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.061 [2024-10-17 16:46:26.131659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:50.061 [2024-10-17 16:46:26.131670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.061 [2024-10-17 16:46:26.131680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.061 [2024-10-17 16:46:26.131716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.061 [2024-10-17 16:46:26.131728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:50.061 [2024-10-17 16:46:26.131738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.061 [2024-10-17 16:46:26.131752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.061 [2024-10-17 16:46:26.257671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.061 [2024-10-17 16:46:26.257741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:50.061 [2024-10-17 16:46:26.257756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.061 [2024-10-17 16:46:26.257766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.359791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.320 [2024-10-17 16:46:26.359856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:50.320 [2024-10-17 16:46:26.359879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.320 [2024-10-17 16:46:26.359890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.359981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.320 [2024-10-17 16:46:26.359994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:50.320 [2024-10-17 16:46:26.360005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.320 [2024-10-17 16:46:26.360015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.360043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.320 [2024-10-17 16:46:26.360054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:50.320 [2024-10-17 16:46:26.360064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.320 [2024-10-17 16:46:26.360075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.360182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.320 [2024-10-17 16:46:26.360195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:50.320 [2024-10-17 16:46:26.360205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.320 [2024-10-17 16:46:26.360215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.360252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.320 [2024-10-17 16:46:26.360264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:31:50.320 [2024-10-17 16:46:26.360274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.320 [2024-10-17 16:46:26.360284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.360328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.320 [2024-10-17 16:46:26.360339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:50.320 [2024-10-17 16:46:26.360349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.320 [2024-10-17 16:46:26.360359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.360431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:50.320 [2024-10-17 16:46:26.360445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:50.320 [2024-10-17 16:46:26.360455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:50.320 [2024-10-17 16:46:26.360466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:50.320 [2024-10-17 16:46:26.360614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.254 ms, result 0 00:31:51.256 00:31:51.256 00:31:51.256 16:46:27 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:51.824 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:31:51.824 16:46:27 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:31:51.824 16:46:27 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:31:51.824 16:46:27 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:51.824 16:46:27 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:51.824 16:46:27 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:31:51.824 16:46:27 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:31:51.824 16:46:27 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 75949 00:31:51.824 16:46:27 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 75949 ']' 00:31:51.824 16:46:27 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 75949 00:31:51.824 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (75949) - No such process 00:31:51.824 Process with pid 75949 is not found 00:31:51.824 16:46:27 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 75949 is not found' 00:31:51.824 00:31:51.824 real 1m6.999s 00:31:51.824 user 1m33.840s 00:31:51.824 sys 0m6.957s 00:31:51.824 ************************************ 00:31:51.824 END TEST ftl_trim 00:31:51.824 ************************************ 00:31:51.824 16:46:27 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:51.824 16:46:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:31:51.825 16:46:28 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:31:51.825 16:46:28 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:51.825 16:46:28 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:51.825 16:46:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:51.825 ************************************ 00:31:51.825 START TEST ftl_restore 00:31:51.825 
************************************ 00:31:51.825 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:31:52.084 * Looking for test storage... 00:31:52.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.084 16:46:28 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:52.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.084 --rc genhtml_branch_coverage=1 00:31:52.084 --rc genhtml_function_coverage=1 00:31:52.084 --rc genhtml_legend=1 00:31:52.084 --rc geninfo_all_blocks=1 00:31:52.084 --rc geninfo_unexecuted_blocks=1 00:31:52.084 00:31:52.084 ' 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:52.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.084 --rc genhtml_branch_coverage=1 00:31:52.084 --rc genhtml_function_coverage=1 00:31:52.084 --rc genhtml_legend=1 00:31:52.084 --rc geninfo_all_blocks=1 00:31:52.084 --rc geninfo_unexecuted_blocks=1 00:31:52.084 00:31:52.084 ' 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:52.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.084 --rc genhtml_branch_coverage=1 00:31:52.084 --rc genhtml_function_coverage=1 00:31:52.084 --rc genhtml_legend=1 00:31:52.084 --rc geninfo_all_blocks=1 00:31:52.084 --rc geninfo_unexecuted_blocks=1 00:31:52.084 00:31:52.084 ' 00:31:52.084 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:52.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.084 --rc genhtml_branch_coverage=1 00:31:52.084 --rc genhtml_function_coverage=1 00:31:52.084 --rc genhtml_legend=1 00:31:52.084 --rc geninfo_all_blocks=1 00:31:52.084 --rc geninfo_unexecuted_blocks=1 00:31:52.084 00:31:52.084 ' 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
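
The version probe traced above is scripts/common.sh deciding which lcov flag spelling to use: `lt 1.15 2` calls cmp_versions, which splits both version strings on '.', '-' and ':' into the ver1/ver2 arrays and compares them component by component. Below is a minimal Bash sketch of that idiom; the function names and the split/compare structure come straight from the trace, but the body is a simplified reconstruction (the in-tree helper also validates each component via decimal(), as the trace shows), not the actual scripts/common.sh source.

    # Sketch of the component-wise version compare exercised by the trace above.
    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local ver1 ver2 ver1_l ver2_l op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # Compare left to right, treating missing components as zero.
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == ">" ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == "<" ]]; return; }
        done
        [[ $op == "==" || $op == ">=" || $op == "<=" ]]
    }

    lt 1.15 2 && echo "lcov older than 2.x"   # succeeds here, since 1 < 2

Because lcov reports 1.15 in this run, lt succeeds and the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling is kept in the LCOV_OPTS export seen above.
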
00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:52.084 16:46:28 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.hFCPiCl4WH 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:52.085 
16:46:28 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76209 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.085 16:46:28 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76209 00:31:52.085 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 76209 ']' 00:31:52.085 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.085 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:52.085 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.085 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:52.085 16:46:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:52.344 [2024-10-17 16:46:28.480855] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:31:52.344 [2024-10-17 16:46:28.481173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76209 ] 00:31:52.603 [2024-10-17 16:46:28.655599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.603 [2024-10-17 16:46:28.778646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.539 16:46:29 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:53.539 16:46:29 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:31:53.539 16:46:29 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:53.539 16:46:29 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:31:53.539 16:46:29 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:53.539 16:46:29 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:31:53.539 16:46:29 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:31:53.539 16:46:29 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:53.796 16:46:29 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:53.797 16:46:29 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:31:53.797 16:46:29 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:53.797 16:46:29 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:31:53.797 16:46:29 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:53.797 16:46:29 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:31:53.797 16:46:29 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:31:53.797 16:46:29 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:54.054 16:46:30 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:54.054 { 00:31:54.054 "name": "nvme0n1", 00:31:54.054 "aliases": [ 00:31:54.054 "3c9b12a6-8862-408e-acad-0c9311c68135" 00:31:54.054 ], 00:31:54.054 "product_name": "NVMe disk", 00:31:54.054 "block_size": 4096, 00:31:54.054 "num_blocks": 1310720, 00:31:54.054 "uuid": 
"3c9b12a6-8862-408e-acad-0c9311c68135", 00:31:54.054 "numa_id": -1, 00:31:54.054 "assigned_rate_limits": { 00:31:54.054 "rw_ios_per_sec": 0, 00:31:54.054 "rw_mbytes_per_sec": 0, 00:31:54.054 "r_mbytes_per_sec": 0, 00:31:54.054 "w_mbytes_per_sec": 0 00:31:54.054 }, 00:31:54.054 "claimed": true, 00:31:54.054 "claim_type": "read_many_write_one", 00:31:54.054 "zoned": false, 00:31:54.054 "supported_io_types": { 00:31:54.054 "read": true, 00:31:54.054 "write": true, 00:31:54.054 "unmap": true, 00:31:54.054 "flush": true, 00:31:54.054 "reset": true, 00:31:54.054 "nvme_admin": true, 00:31:54.054 "nvme_io": true, 00:31:54.054 "nvme_io_md": false, 00:31:54.054 "write_zeroes": true, 00:31:54.054 "zcopy": false, 00:31:54.054 "get_zone_info": false, 00:31:54.054 "zone_management": false, 00:31:54.054 "zone_append": false, 00:31:54.054 "compare": true, 00:31:54.054 "compare_and_write": false, 00:31:54.054 "abort": true, 00:31:54.054 "seek_hole": false, 00:31:54.054 "seek_data": false, 00:31:54.054 "copy": true, 00:31:54.054 "nvme_iov_md": false 00:31:54.054 }, 00:31:54.054 "driver_specific": { 00:31:54.054 "nvme": [ 00:31:54.054 { 00:31:54.054 "pci_address": "0000:00:11.0", 00:31:54.054 "trid": { 00:31:54.054 "trtype": "PCIe", 00:31:54.054 "traddr": "0000:00:11.0" 00:31:54.054 }, 00:31:54.054 "ctrlr_data": { 00:31:54.054 "cntlid": 0, 00:31:54.054 "vendor_id": "0x1b36", 00:31:54.055 "model_number": "QEMU NVMe Ctrl", 00:31:54.055 "serial_number": "12341", 00:31:54.055 "firmware_revision": "8.0.0", 00:31:54.055 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:54.055 "oacs": { 00:31:54.055 "security": 0, 00:31:54.055 "format": 1, 00:31:54.055 "firmware": 0, 00:31:54.055 "ns_manage": 1 00:31:54.055 }, 00:31:54.055 "multi_ctrlr": false, 00:31:54.055 "ana_reporting": false 00:31:54.055 }, 00:31:54.055 "vs": { 00:31:54.055 "nvme_version": "1.4" 00:31:54.055 }, 00:31:54.055 "ns_data": { 00:31:54.055 "id": 1, 00:31:54.055 "can_share": false 00:31:54.055 } 00:31:54.055 } 00:31:54.055 ], 00:31:54.055 "mp_policy": "active_passive" 00:31:54.055 } 00:31:54.055 } 00:31:54.055 ]' 00:31:54.055 16:46:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:54.055 16:46:30 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:31:54.055 16:46:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:54.055 16:46:30 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:54.055 16:46:30 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:54.055 16:46:30 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:31:54.055 16:46:30 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:31:54.055 16:46:30 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:54.055 16:46:30 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:31:54.055 16:46:30 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:54.055 16:46:30 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:54.356 16:46:30 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=49615d30-6c7c-4e2f-9373-7fb7ee87cb60 00:31:54.356 16:46:30 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:31:54.356 16:46:30 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49615d30-6c7c-4e2f-9373-7fb7ee87cb60 00:31:54.634 16:46:30 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:31:54.892 16:46:31 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a3531b8f-b1d3-46b8-a3c3-c8b159d3cdc2 00:31:54.892 16:46:31 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a3531b8f-b1d3-46b8-a3c3-c8b159d3cdc2 00:31:55.151 16:46:31 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.151 16:46:31 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:55.151 16:46:31 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.151 16:46:31 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:31:55.151 16:46:31 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:55.151 16:46:31 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.151 16:46:31 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:31:55.152 16:46:31 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.152 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.152 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:55.152 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:31:55.152 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:31:55.152 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.411 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:55.411 { 00:31:55.411 "name": "5e66c87a-e1de-46db-afce-f1e5a5cb1dd8", 00:31:55.411 "aliases": [ 00:31:55.411 "lvs/nvme0n1p0" 00:31:55.411 ], 00:31:55.411 "product_name": "Logical Volume", 00:31:55.411 "block_size": 4096, 00:31:55.411 "num_blocks": 26476544, 00:31:55.411 "uuid": "5e66c87a-e1de-46db-afce-f1e5a5cb1dd8", 00:31:55.411 "assigned_rate_limits": { 00:31:55.411 "rw_ios_per_sec": 0, 00:31:55.411 "rw_mbytes_per_sec": 0, 00:31:55.411 "r_mbytes_per_sec": 0, 00:31:55.411 "w_mbytes_per_sec": 0 00:31:55.411 }, 00:31:55.411 "claimed": false, 00:31:55.411 "zoned": false, 00:31:55.411 "supported_io_types": { 00:31:55.411 "read": true, 00:31:55.411 "write": true, 00:31:55.411 "unmap": true, 00:31:55.411 "flush": false, 00:31:55.411 "reset": true, 00:31:55.411 "nvme_admin": false, 00:31:55.411 "nvme_io": false, 00:31:55.411 "nvme_io_md": false, 00:31:55.411 "write_zeroes": true, 00:31:55.411 "zcopy": false, 00:31:55.411 "get_zone_info": false, 00:31:55.411 "zone_management": false, 00:31:55.411 "zone_append": false, 00:31:55.411 "compare": false, 00:31:55.411 "compare_and_write": false, 00:31:55.411 "abort": false, 00:31:55.411 "seek_hole": true, 00:31:55.411 "seek_data": true, 00:31:55.411 "copy": false, 00:31:55.411 "nvme_iov_md": false 00:31:55.411 }, 00:31:55.411 "driver_specific": { 00:31:55.411 "lvol": { 00:31:55.411 "lvol_store_uuid": "a3531b8f-b1d3-46b8-a3c3-c8b159d3cdc2", 00:31:55.411 "base_bdev": "nvme0n1", 00:31:55.411 "thin_provision": true, 00:31:55.411 "num_allocated_clusters": 0, 00:31:55.411 "snapshot": false, 00:31:55.411 "clone": false, 00:31:55.411 "esnap_clone": false 00:31:55.411 } 00:31:55.411 } 00:31:55.411 } 00:31:55.411 ]' 00:31:55.411 16:46:31 ftl.ftl_restore -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:55.411 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:31:55.411 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:55.411 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:55.411 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:55.411 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:31:55.411 16:46:31 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:31:55.411 16:46:31 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:31:55.411 16:46:31 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:55.669 16:46:31 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:55.669 16:46:31 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:55.669 16:46:31 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.669 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.669 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:55.669 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:31:55.669 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:31:55.669 16:46:31 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:55.927 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:55.927 { 00:31:55.927 "name": "5e66c87a-e1de-46db-afce-f1e5a5cb1dd8", 00:31:55.927 "aliases": [ 00:31:55.927 "lvs/nvme0n1p0" 00:31:55.927 ], 00:31:55.927 "product_name": "Logical Volume", 00:31:55.927 "block_size": 4096, 00:31:55.927 "num_blocks": 26476544, 00:31:55.927 "uuid": "5e66c87a-e1de-46db-afce-f1e5a5cb1dd8", 00:31:55.927 "assigned_rate_limits": { 00:31:55.927 "rw_ios_per_sec": 0, 00:31:55.927 "rw_mbytes_per_sec": 0, 00:31:55.927 "r_mbytes_per_sec": 0, 00:31:55.927 "w_mbytes_per_sec": 0 00:31:55.927 }, 00:31:55.927 "claimed": false, 00:31:55.927 "zoned": false, 00:31:55.927 "supported_io_types": { 00:31:55.927 "read": true, 00:31:55.927 "write": true, 00:31:55.927 "unmap": true, 00:31:55.927 "flush": false, 00:31:55.927 "reset": true, 00:31:55.927 "nvme_admin": false, 00:31:55.927 "nvme_io": false, 00:31:55.927 "nvme_io_md": false, 00:31:55.927 "write_zeroes": true, 00:31:55.927 "zcopy": false, 00:31:55.927 "get_zone_info": false, 00:31:55.927 "zone_management": false, 00:31:55.927 "zone_append": false, 00:31:55.927 "compare": false, 00:31:55.927 "compare_and_write": false, 00:31:55.927 "abort": false, 00:31:55.927 "seek_hole": true, 00:31:55.927 "seek_data": true, 00:31:55.927 "copy": false, 00:31:55.927 "nvme_iov_md": false 00:31:55.927 }, 00:31:55.927 "driver_specific": { 00:31:55.927 "lvol": { 00:31:55.927 "lvol_store_uuid": "a3531b8f-b1d3-46b8-a3c3-c8b159d3cdc2", 00:31:55.927 "base_bdev": "nvme0n1", 00:31:55.927 "thin_provision": true, 00:31:55.927 "num_allocated_clusters": 0, 00:31:55.927 "snapshot": false, 00:31:55.927 "clone": false, 00:31:55.927 "esnap_clone": false 00:31:55.927 } 00:31:55.927 } 00:31:55.927 } 00:31:55.927 ]' 00:31:55.927 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 
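
Each of the JSON dumps above is bdev_get_bdevs output being picked apart by the get_bdev_size helper, whose xtrace (common/autotest_common.sh@1378-1388) brackets every dump: block_size and num_blocks are extracted with jq and multiplied into a size in MiB. A sketch of that flow, with the jq filters verbatim from the trace and the function body reconstructed, so treat the plumbing as illustrative:

    # Reconstruction of the get_bdev_size flow traced above.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for both bdevs here
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 (nvme0n1), 26476544 (lvol)
      echo $((bs * nb / 1024 / 1024))               # 5120 MiB and 103424 MiB, as logged
    }

The 5120 and 103424 values echoed here are the same figures that reappear in the ftl/common.sh size checks elsewhere in the trace.
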
00:31:55.927 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:31:55.927 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:55.928 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:55.928 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:55.928 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:31:55.928 16:46:32 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:31:55.928 16:46:32 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:56.186 16:46:32 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:56.186 16:46:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:56.186 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:56.186 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:56.186 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:31:56.186 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:31:56.186 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 00:31:56.444 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:56.444 { 00:31:56.444 "name": "5e66c87a-e1de-46db-afce-f1e5a5cb1dd8", 00:31:56.444 "aliases": [ 00:31:56.444 "lvs/nvme0n1p0" 00:31:56.444 ], 00:31:56.444 "product_name": "Logical Volume", 00:31:56.444 "block_size": 4096, 00:31:56.444 "num_blocks": 26476544, 00:31:56.444 "uuid": "5e66c87a-e1de-46db-afce-f1e5a5cb1dd8", 00:31:56.444 "assigned_rate_limits": { 00:31:56.444 "rw_ios_per_sec": 0, 00:31:56.444 "rw_mbytes_per_sec": 0, 00:31:56.444 "r_mbytes_per_sec": 0, 00:31:56.444 "w_mbytes_per_sec": 0 00:31:56.444 }, 00:31:56.444 "claimed": false, 00:31:56.444 "zoned": false, 00:31:56.444 "supported_io_types": { 00:31:56.444 "read": true, 00:31:56.444 "write": true, 00:31:56.444 "unmap": true, 00:31:56.444 "flush": false, 00:31:56.444 "reset": true, 00:31:56.444 "nvme_admin": false, 00:31:56.444 "nvme_io": false, 00:31:56.444 "nvme_io_md": false, 00:31:56.444 "write_zeroes": true, 00:31:56.444 "zcopy": false, 00:31:56.444 "get_zone_info": false, 00:31:56.444 "zone_management": false, 00:31:56.444 "zone_append": false, 00:31:56.444 "compare": false, 00:31:56.444 "compare_and_write": false, 00:31:56.444 "abort": false, 00:31:56.444 "seek_hole": true, 00:31:56.444 "seek_data": true, 00:31:56.444 "copy": false, 00:31:56.444 "nvme_iov_md": false 00:31:56.444 }, 00:31:56.444 "driver_specific": { 00:31:56.444 "lvol": { 00:31:56.444 "lvol_store_uuid": "a3531b8f-b1d3-46b8-a3c3-c8b159d3cdc2", 00:31:56.444 "base_bdev": "nvme0n1", 00:31:56.444 "thin_provision": true, 00:31:56.444 "num_allocated_clusters": 0, 00:31:56.444 "snapshot": false, 00:31:56.444 "clone": false, 00:31:56.444 "esnap_clone": false 00:31:56.444 } 00:31:56.444 } 00:31:56.444 } 00:31:56.444 ]' 00:31:56.444 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:56.444 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:31:56.444 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:56.444 16:46:32 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # nb=26476544
00:31:56.444 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424
00:31:56.444 16:46:32 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424
00:31:56.444 16:46:32 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10
00:31:56.444 16:46:32 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 --l2p_dram_limit 10'
00:31:56.444 16:46:32 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']'
00:31:56.444 16:46:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:31:56.444 16:46:32 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0'
00:31:56.444 16:46:32 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected
00:31:56.444 16:46:32 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5e66c87a-e1de-46db-afce-f1e5a5cb1dd8 --l2p_dram_limit 10 -c nvc0n1p0
00:31:56.704 [2024-10-17 16:46:32.846009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:56.704 [2024-10-17 16:46:32.846074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:31:56.704 [2024-10-17 16:46:32.846095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:31:56.704 [2024-10-17 16:46:32.846106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:56.704 [2024-10-17 16:46:32.846173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:56.704 [2024-10-17 16:46:32.846189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:56.704 [2024-10-17 16:46:32.846203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:31:56.704 [2024-10-17 16:46:32.846213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:56.704 [2024-10-17 16:46:32.846245] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:31:56.704 [2024-10-17 16:46:32.847376] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:31:56.704 [2024-10-17 16:46:32.847418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:56.704 [2024-10-17 16:46:32.847430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:56.704 [2024-10-17 16:46:32.847447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms
00:31:56.704 [2024-10-17 16:46:32.847457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:56.704 [2024-10-17 16:46:32.847587] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dadd3054-484d-499b-9d35-ce881e09f580
00:31:56.704 [2024-10-17 16:46:32.849070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:56.704 [2024-10-17 16:46:32.849110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:31:56.704 [2024-10-17 16:46:32.849123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:31:56.704 [2024-10-17 16:46:32.849138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:56.704 [2024-10-17 16:46:32.856572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:56.704 [2024-10-17
16:46:32.856609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:56.704 [2024-10-17 16:46:32.856622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.382 ms 00:31:56.704 [2024-10-17 16:46:32.856635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.704 [2024-10-17 16:46:32.856764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.704 [2024-10-17 16:46:32.856783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:56.704 [2024-10-17 16:46:32.856796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:31:56.704 [2024-10-17 16:46:32.856813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.704 [2024-10-17 16:46:32.856895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.704 [2024-10-17 16:46:32.856913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:56.704 [2024-10-17 16:46:32.856924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:56.704 [2024-10-17 16:46:32.856936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.704 [2024-10-17 16:46:32.856964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:56.704 [2024-10-17 16:46:32.862021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.704 [2024-10-17 16:46:32.862175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:56.704 [2024-10-17 16:46:32.862203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.071 ms 00:31:56.704 [2024-10-17 16:46:32.862217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.704 [2024-10-17 16:46:32.862263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.704 [2024-10-17 16:46:32.862274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:56.704 [2024-10-17 16:46:32.862286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:56.704 [2024-10-17 16:46:32.862297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.704 [2024-10-17 16:46:32.862349] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:56.704 [2024-10-17 16:46:32.862479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:56.704 [2024-10-17 16:46:32.862499] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:56.704 [2024-10-17 16:46:32.862513] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:56.704 [2024-10-17 16:46:32.862529] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:56.704 [2024-10-17 16:46:32.862541] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:56.704 [2024-10-17 16:46:32.862555] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:56.704 [2024-10-17 16:46:32.862565] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:56.704 [2024-10-17 16:46:32.862578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:56.704 [2024-10-17 16:46:32.862589] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:56.704 [2024-10-17 16:46:32.862602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.704 [2024-10-17 16:46:32.862615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:56.704 [2024-10-17 16:46:32.862628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:31:56.704 [2024-10-17 16:46:32.862649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.704 [2024-10-17 16:46:32.862743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.704 [2024-10-17 16:46:32.862755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:56.704 [2024-10-17 16:46:32.862769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:56.704 [2024-10-17 16:46:32.862778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.704 [2024-10-17 16:46:32.862867] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:56.704 [2024-10-17 16:46:32.862879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:56.704 [2024-10-17 16:46:32.862896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:56.704 [2024-10-17 16:46:32.862907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.704 [2024-10-17 16:46:32.862919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:56.705 [2024-10-17 16:46:32.862929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:56.705 [2024-10-17 16:46:32.862941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:56.705 [2024-10-17 16:46:32.862950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:56.705 [2024-10-17 16:46:32.862962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:56.705 [2024-10-17 16:46:32.862971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:56.705 [2024-10-17 16:46:32.862983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:56.705 [2024-10-17 16:46:32.862992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:56.705 [2024-10-17 16:46:32.863003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:56.705 [2024-10-17 16:46:32.863015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:56.705 [2024-10-17 16:46:32.863027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:56.705 [2024-10-17 16:46:32.863037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:56.705 [2024-10-17 16:46:32.863060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:56.705 [2024-10-17 16:46:32.863074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:56.705 [2024-10-17 16:46:32.863095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.705 [2024-10-17 16:46:32.863115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:56.705 
[2024-10-17 16:46:32.863125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.705 [2024-10-17 16:46:32.863145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:56.705 [2024-10-17 16:46:32.863156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.705 [2024-10-17 16:46:32.863177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:56.705 [2024-10-17 16:46:32.863186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:56.705 [2024-10-17 16:46:32.863207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:56.705 [2024-10-17 16:46:32.863221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:56.705 [2024-10-17 16:46:32.863242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:56.705 [2024-10-17 16:46:32.863251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:56.705 [2024-10-17 16:46:32.863262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:56.705 [2024-10-17 16:46:32.863271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:56.705 [2024-10-17 16:46:32.863282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:56.705 [2024-10-17 16:46:32.863291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:56.705 [2024-10-17 16:46:32.863312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:56.705 [2024-10-17 16:46:32.863323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863332] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:56.705 [2024-10-17 16:46:32.863345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:56.705 [2024-10-17 16:46:32.863354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:56.705 [2024-10-17 16:46:32.863368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:56.705 [2024-10-17 16:46:32.863379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:56.705 [2024-10-17 16:46:32.863393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:56.705 [2024-10-17 16:46:32.863403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:56.705 [2024-10-17 16:46:32.863415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:56.705 [2024-10-17 16:46:32.863424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:56.705 [2024-10-17 16:46:32.863436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:56.705 [2024-10-17 16:46:32.863450] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:56.705 [2024-10-17 
16:46:32.863466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:56.705 [2024-10-17 16:46:32.863477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:56.705 [2024-10-17 16:46:32.863490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:56.705 [2024-10-17 16:46:32.863501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:56.705 [2024-10-17 16:46:32.863514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:56.705 [2024-10-17 16:46:32.863524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:56.705 [2024-10-17 16:46:32.863537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:56.705 [2024-10-17 16:46:32.863547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:56.705 [2024-10-17 16:46:32.863559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:56.705 [2024-10-17 16:46:32.863569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:56.705 [2024-10-17 16:46:32.863585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:56.705 [2024-10-17 16:46:32.863596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:56.705 [2024-10-17 16:46:32.863609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:56.705 [2024-10-17 16:46:32.863619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:56.705 [2024-10-17 16:46:32.863631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:56.705 [2024-10-17 16:46:32.863641] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:56.705 [2024-10-17 16:46:32.863657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:56.705 [2024-10-17 16:46:32.863671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:56.705 [2024-10-17 16:46:32.863684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:56.705 [2024-10-17 16:46:32.863694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:56.705 [2024-10-17 16:46:32.863717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:56.705 [2024-10-17 16:46:32.863727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.705 [2024-10-17 16:46:32.863742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:56.705 [2024-10-17 16:46:32.863752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.920 ms 00:31:56.705 [2024-10-17 16:46:32.863765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.705 [2024-10-17 16:46:32.863807] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:56.705 [2024-10-17 16:46:32.863829] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:59.996 [2024-10-17 16:46:36.067484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.067557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:59.996 [2024-10-17 16:46:36.067575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3208.876 ms 00:31:59.996 [2024-10-17 16:46:36.067588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.102791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.103052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:59.996 [2024-10-17 16:46:36.103078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.941 ms 00:31:59.996 [2024-10-17 16:46:36.103091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.103251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.103267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:59.996 [2024-10-17 16:46:36.103278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:31:59.996 [2024-10-17 16:46:36.103294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.143721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.143770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:59.996 [2024-10-17 16:46:36.143784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.449 ms 00:31:59.996 [2024-10-17 16:46:36.143797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.143844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.143859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:59.996 [2024-10-17 16:46:36.143870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:59.996 [2024-10-17 16:46:36.143886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.144402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.144421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:59.996 [2024-10-17 16:46:36.144432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:31:59.996 [2024-10-17 16:46:36.144444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 
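
The layout dump that ends above is internally consistent and can be spot-checked from the parameters it prints: the 80.00 MiB l2p region in the NV cache layout is exactly the reported L2P entry count times the 4-byte address size. A quick shell check, with both values copied from the dump:

    # Spot-check of the FTL layout dump above.
    entries=20971520    # "L2P entries"      (ftl_layout.c: 689)
    addr_size=4         # "L2P address size" (ftl_layout.c: 690)
    echo "$((entries * addr_size / 1024 / 1024)) MiB"   # prints "80 MiB",
    # matching "Region l2p ... blocks: 80.00 MiB" in the NV cache layout

Likewise, the --l2p_dram_limit 10 passed to bdev_ftl_create earlier surfaces below as "l2p maximum resident size is: 9 (of 10) MiB".
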
[2024-10-17 16:46:36.144547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.144561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:59.996 [2024-10-17 16:46:36.144572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:31:59.996 [2024-10-17 16:46:36.144587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.164968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.165018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:59.996 [2024-10-17 16:46:36.165033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.390 ms 00:31:59.996 [2024-10-17 16:46:36.165049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.177736] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:59.996 [2024-10-17 16:46:36.181055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.181089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:59.996 [2024-10-17 16:46:36.181106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.922 ms 00:31:59.996 [2024-10-17 16:46:36.181116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.289496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.289817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:59.996 [2024-10-17 16:46:36.289851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.501 ms 00:31:59.996 [2024-10-17 16:46:36.289863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:59.996 [2024-10-17 16:46:36.290078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:59.996 [2024-10-17 16:46:36.290092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:59.996 [2024-10-17 16:46:36.290109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:31:59.996 [2024-10-17 16:46:36.290123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.256 [2024-10-17 16:46:36.327893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.256 [2024-10-17 16:46:36.327954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:32:00.256 [2024-10-17 16:46:36.327975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.765 ms 00:32:00.256 [2024-10-17 16:46:36.327986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.256 [2024-10-17 16:46:36.366665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.256 [2024-10-17 16:46:36.366731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:32:00.256 [2024-10-17 16:46:36.366751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.677 ms 00:32:00.256 [2024-10-17 16:46:36.366761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.256 [2024-10-17 16:46:36.367454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.256 [2024-10-17 16:46:36.367479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:00.256 
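
Every Action/name/duration/status quadruple in this sequence comes from trace_step in mngt/ftl_mngt.c, so per-step timings can be pulled mechanically out of a console log like this one; the 3208.876 ms 'Scrub NV cache' step dominates the 3763.349 ms 'FTL startup' total reported just below. An illustrative one-liner, where console.log is a placeholder for a saved copy of this output and the grep/awk plumbing is mine:

    # Sum the per-step durations reported by trace_step ("duration: <n> ms").
    grep -o 'duration: [0-9.]* ms' console.log \
      | awk '{ total += $2 } END { printf "%d steps, %.3f ms total\n", NR, total }'
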
[2024-10-17 16:46:36.367494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:32:00.256 [2024-10-17 16:46:36.367505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.256 [2024-10-17 16:46:36.484015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.256 [2024-10-17 16:46:36.484084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:32:00.256 [2024-10-17 16:46:36.484109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 116.624 ms 00:32:00.256 [2024-10-17 16:46:36.484120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.256 [2024-10-17 16:46:36.522758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.256 [2024-10-17 16:46:36.522999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:32:00.256 [2024-10-17 16:46:36.523034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.578 ms 00:32:00.256 [2024-10-17 16:46:36.523045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.515 [2024-10-17 16:46:36.562340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.515 [2024-10-17 16:46:36.562410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:32:00.515 [2024-10-17 16:46:36.562430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.252 ms 00:32:00.515 [2024-10-17 16:46:36.562441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.515 [2024-10-17 16:46:36.602231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.515 [2024-10-17 16:46:36.602313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:00.515 [2024-10-17 16:46:36.602334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.784 ms 00:32:00.515 [2024-10-17 16:46:36.602345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.515 [2024-10-17 16:46:36.602423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.515 [2024-10-17 16:46:36.602435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:00.515 [2024-10-17 16:46:36.602453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:00.515 [2024-10-17 16:46:36.602463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.515 [2024-10-17 16:46:36.602594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.515 [2024-10-17 16:46:36.602606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:00.515 [2024-10-17 16:46:36.602620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:00.515 [2024-10-17 16:46:36.602631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.515 [2024-10-17 16:46:36.603773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3763.349 ms, result 0 00:32:00.515 { 00:32:00.515 "name": "ftl0", 00:32:00.515 "uuid": "dadd3054-484d-499b-9d35-ce881e09f580" 00:32:00.515 } 00:32:00.515 16:46:36 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:32:00.515 16:46:36 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:32:00.775 16:46:36 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:32:00.775 16:46:36 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:00.775 [2024-10-17 16:46:37.046319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.775 [2024-10-17 16:46:37.046382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:00.775 [2024-10-17 16:46:37.046399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:00.775 [2024-10-17 16:46:37.046424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.775 [2024-10-17 16:46:37.046454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:00.775 [2024-10-17 16:46:37.050722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.775 [2024-10-17 16:46:37.050756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:00.775 [2024-10-17 16:46:37.050772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:32:00.775 [2024-10-17 16:46:37.050782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.775 [2024-10-17 16:46:37.051041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.775 [2024-10-17 16:46:37.051054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:00.775 [2024-10-17 16:46:37.051067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:32:00.775 [2024-10-17 16:46:37.051077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.775 [2024-10-17 16:46:37.053602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.775 [2024-10-17 16:46:37.053624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:00.775 [2024-10-17 16:46:37.053639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.506 ms 00:32:00.775 [2024-10-17 16:46:37.053649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:00.775 [2024-10-17 16:46:37.058675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:00.775 [2024-10-17 16:46:37.058719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:00.775 [2024-10-17 16:46:37.058735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.008 ms 00:32:00.775 [2024-10-17 16:46:37.058745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.098527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.035 [2024-10-17 16:46:37.098588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:01.035 [2024-10-17 16:46:37.098608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.756 ms 00:32:01.035 [2024-10-17 16:46:37.098618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.121971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.035 [2024-10-17 16:46:37.122052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:01.035 [2024-10-17 16:46:37.122078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.306 ms 00:32:01.035 [2024-10-17 16:46:37.122088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.122279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.035 [2024-10-17 16:46:37.122294] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:01.035 [2024-10-17 16:46:37.122308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:32:01.035 [2024-10-17 16:46:37.122318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.158489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.035 [2024-10-17 16:46:37.158532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:01.035 [2024-10-17 16:46:37.158549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.203 ms 00:32:01.035 [2024-10-17 16:46:37.158559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.194573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.035 [2024-10-17 16:46:37.194629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:01.035 [2024-10-17 16:46:37.194648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.018 ms 00:32:01.035 [2024-10-17 16:46:37.194659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.231535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.035 [2024-10-17 16:46:37.231601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:01.035 [2024-10-17 16:46:37.231620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.853 ms 00:32:01.035 [2024-10-17 16:46:37.231631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.268203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.035 [2024-10-17 16:46:37.268258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:01.035 [2024-10-17 16:46:37.268276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.502 ms 00:32:01.035 [2024-10-17 16:46:37.268286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.035 [2024-10-17 16:46:37.268340] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:01.035 [2024-10-17 16:46:37.268358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268485] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:01.035 [2024-10-17 16:46:37.268568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 
[2024-10-17 16:46:37.268823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.268992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:32:01.036 [2024-10-17 16:46:37.269138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:01.036 [2024-10-17 16:46:37.269639] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:01.036 [2024-10-17 16:46:37.269652] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dadd3054-484d-499b-9d35-ce881e09f580 00:32:01.036 [2024-10-17 16:46:37.269663] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:01.036 [2024-10-17 16:46:37.269678] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:01.036 [2024-10-17 16:46:37.269692] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:01.036 [2024-10-17 16:46:37.269715] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:01.036 [2024-10-17 16:46:37.269724] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:01.036 [2024-10-17 16:46:37.269740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:01.036 [2024-10-17 16:46:37.269750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:01.036 [2024-10-17 16:46:37.269762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:01.036 [2024-10-17 16:46:37.269770] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:32:01.036 [2024-10-17 16:46:37.269783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.036 [2024-10-17 16:46:37.269793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:01.036 [2024-10-17 16:46:37.269806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:32:01.036 [2024-10-17 16:46:37.269815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.037 [2024-10-17 16:46:37.290041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.037 [2024-10-17 16:46:37.290083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:01.037 [2024-10-17 16:46:37.290099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.195 ms 00:32:01.037 [2024-10-17 16:46:37.290109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.037 [2024-10-17 16:46:37.290660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:01.037 [2024-10-17 16:46:37.290675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:01.037 [2024-10-17 16:46:37.290688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.518 ms 00:32:01.037 [2024-10-17 16:46:37.290708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.356654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.356897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:01.296 [2024-10-17 16:46:37.356928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.356941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.357025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.357037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:01.296 [2024-10-17 16:46:37.357051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.357061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.357194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.357209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:01.296 [2024-10-17 16:46:37.357223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.357234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.357261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.357272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:01.296 [2024-10-17 16:46:37.357285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.357296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.483098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.483169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:01.296 [2024-10-17 16:46:37.483188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:32:01.296 [2024-10-17 16:46:37.483198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.585993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.586062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:01.296 [2024-10-17 16:46:37.586080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.586091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.586222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.586239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:01.296 [2024-10-17 16:46:37.586253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.586263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.586336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.586350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:01.296 [2024-10-17 16:46:37.586363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.586373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.586501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.296 [2024-10-17 16:46:37.586514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:01.296 [2024-10-17 16:46:37.586530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.296 [2024-10-17 16:46:37.586541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.296 [2024-10-17 16:46:37.586583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.297 [2024-10-17 16:46:37.586595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:01.297 [2024-10-17 16:46:37.586608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.297 [2024-10-17 16:46:37.586618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.297 [2024-10-17 16:46:37.586659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.297 [2024-10-17 16:46:37.586670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:01.297 [2024-10-17 16:46:37.586683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.297 [2024-10-17 16:46:37.586695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.297 [2024-10-17 16:46:37.586789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:01.297 [2024-10-17 16:46:37.586803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:01.297 [2024-10-17 16:46:37.586816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:01.297 [2024-10-17 16:46:37.586826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:01.297 [2024-10-17 16:46:37.586967] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 541.492 ms, result 0 00:32:01.558 true 00:32:01.558 16:46:37 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76209 
00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76209 ']' 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76209 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76209 00:32:01.558 killing process with pid 76209 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76209' 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 76209 00:32:01.558 16:46:37 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 76209 00:32:06.867 16:46:42 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:11.072 262144+0 records in 00:32:11.072 262144+0 records out 00:32:11.072 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.23182 s, 254 MB/s 00:32:11.072 16:46:46 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:12.452 16:46:48 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:12.452 [2024-10-17 16:46:48.673369] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:32:12.452 [2024-10-17 16:46:48.673679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76462 ] 00:32:12.711 [2024-10-17 16:46:48.855002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.711 [2024-10-17 16:46:48.981761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.280 [2024-10-17 16:46:49.350710] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.280 [2024-10-17 16:46:49.350777] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:13.280 [2024-10-17 16:46:49.520865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.280 [2024-10-17 16:46:49.520928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:13.280 [2024-10-17 16:46:49.520944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:13.280 [2024-10-17 16:46:49.520964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.280 [2024-10-17 16:46:49.521019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.280 [2024-10-17 16:46:49.521032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.280 [2024-10-17 16:46:49.521043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:13.280 [2024-10-17 16:46:49.521059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.280 [2024-10-17 16:46:49.521081] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:32:13.280 [2024-10-17 16:46:49.522054] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:13.280 [2024-10-17 16:46:49.522078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.280 [2024-10-17 16:46:49.522095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.280 [2024-10-17 16:46:49.522107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:32:13.280 [2024-10-17 16:46:49.522117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.280 [2024-10-17 16:46:49.523598] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:13.280 [2024-10-17 16:46:49.543708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.280 [2024-10-17 16:46:49.543756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:13.280 [2024-10-17 16:46:49.543772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.142 ms 00:32:13.280 [2024-10-17 16:46:49.543784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.280 [2024-10-17 16:46:49.543862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.280 [2024-10-17 16:46:49.543881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:13.280 [2024-10-17 16:46:49.543892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:32:13.280 [2024-10-17 16:46:49.543902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.280 [2024-10-17 16:46:49.551018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.280 [2024-10-17 16:46:49.551179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.280 [2024-10-17 16:46:49.551203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.046 ms 00:32:13.280 [2024-10-17 16:46:49.551215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.280 [2024-10-17 16:46:49.551349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.280 [2024-10-17 16:46:49.551363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.280 [2024-10-17 16:46:49.551374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:32:13.280 [2024-10-17 16:46:49.551384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.280 [2024-10-17 16:46:49.551432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.281 [2024-10-17 16:46:49.551445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:13.281 [2024-10-17 16:46:49.551455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:13.281 [2024-10-17 16:46:49.551465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.281 [2024-10-17 16:46:49.551494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:13.281 [2024-10-17 16:46:49.556762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.281 [2024-10-17 16:46:49.556796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.281 [2024-10-17 16:46:49.556809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.284 ms 00:32:13.281 [2024-10-17 16:46:49.556820] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.281 [2024-10-17 16:46:49.556861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.281 [2024-10-17 16:46:49.556873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:13.281 [2024-10-17 16:46:49.556883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:32:13.281 [2024-10-17 16:46:49.556894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.281 [2024-10-17 16:46:49.556950] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:13.281 [2024-10-17 16:46:49.556978] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:13.281 [2024-10-17 16:46:49.557014] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:13.281 [2024-10-17 16:46:49.557037] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:13.281 [2024-10-17 16:46:49.557127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:13.281 [2024-10-17 16:46:49.557139] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:13.281 [2024-10-17 16:46:49.557152] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:13.281 [2024-10-17 16:46:49.557165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557178] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557189] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:13.281 [2024-10-17 16:46:49.557200] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:13.281 [2024-10-17 16:46:49.557209] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:13.281 [2024-10-17 16:46:49.557219] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:13.281 [2024-10-17 16:46:49.557229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.281 [2024-10-17 16:46:49.557247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:13.281 [2024-10-17 16:46:49.557257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:32:13.281 [2024-10-17 16:46:49.557267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.281 [2024-10-17 16:46:49.557340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.281 [2024-10-17 16:46:49.557358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:13.281 [2024-10-17 16:46:49.557374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:13.281 [2024-10-17 16:46:49.557390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.281 [2024-10-17 16:46:49.557513] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:13.281 [2024-10-17 16:46:49.557534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:13.281 [2024-10-17 16:46:49.557558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:32:13.281 [2024-10-17 16:46:49.557574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:13.281 [2024-10-17 16:46:49.557608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:13.281 [2024-10-17 16:46:49.557658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.281 [2024-10-17 16:46:49.557691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:13.281 [2024-10-17 16:46:49.557721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:13.281 [2024-10-17 16:46:49.557738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:13.281 [2024-10-17 16:46:49.557754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:13.281 [2024-10-17 16:46:49.557771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:13.281 [2024-10-17 16:46:49.557804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:13.281 [2024-10-17 16:46:49.557837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:13.281 [2024-10-17 16:46:49.557867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:13.281 [2024-10-17 16:46:49.557895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:13.281 [2024-10-17 16:46:49.557922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:13.281 [2024-10-17 16:46:49.557950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:13.281 [2024-10-17 16:46:49.557968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:13.281 [2024-10-17 16:46:49.557977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:13.281 [2024-10-17 16:46:49.557986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.281 [2024-10-17 16:46:49.557995] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:32:13.281 [2024-10-17 16:46:49.558005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:13.281 [2024-10-17 16:46:49.558014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:13.281 [2024-10-17 16:46:49.558023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:13.281 [2024-10-17 16:46:49.558032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:13.281 [2024-10-17 16:46:49.558042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.281 [2024-10-17 16:46:49.558051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:13.281 [2024-10-17 16:46:49.558060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:13.281 [2024-10-17 16:46:49.558068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.281 [2024-10-17 16:46:49.558077] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:13.281 [2024-10-17 16:46:49.558087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:13.281 [2024-10-17 16:46:49.558097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:13.281 [2024-10-17 16:46:49.558106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:13.281 [2024-10-17 16:46:49.558116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:13.281 [2024-10-17 16:46:49.558125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:13.281 [2024-10-17 16:46:49.558134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:13.281 [2024-10-17 16:46:49.558143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:13.281 [2024-10-17 16:46:49.558152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:13.281 [2024-10-17 16:46:49.558161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:13.281 [2024-10-17 16:46:49.558172] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:13.281 [2024-10-17 16:46:49.558184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:13.281 [2024-10-17 16:46:49.558197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:13.281 [2024-10-17 16:46:49.558208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:13.281 [2024-10-17 16:46:49.558218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:13.281 [2024-10-17 16:46:49.558228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:13.281 [2024-10-17 16:46:49.558239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:13.281 [2024-10-17 16:46:49.558249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:13.281 [2024-10-17 16:46:49.558260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:13.281 [2024-10-17 16:46:49.558270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:13.281 [2024-10-17 16:46:49.558280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:13.281 [2024-10-17 16:46:49.558291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:13.281 [2024-10-17 16:46:49.558301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:13.281 [2024-10-17 16:46:49.558311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:13.281 [2024-10-17 16:46:49.558322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:13.281 [2024-10-17 16:46:49.558333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:13.281 [2024-10-17 16:46:49.558343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:13.281 [2024-10-17 16:46:49.558354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:13.281 [2024-10-17 16:46:49.558373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:13.282 [2024-10-17 16:46:49.558384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:13.282 [2024-10-17 16:46:49.558395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:13.282 [2024-10-17 16:46:49.558405] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:13.282 [2024-10-17 16:46:49.558416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.282 [2024-10-17 16:46:49.558427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:13.282 [2024-10-17 16:46:49.558437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:32:13.282 [2024-10-17 16:46:49.558447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.600665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.600719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.542 [2024-10-17 16:46:49.600735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.226 ms 00:32:13.542 [2024-10-17 16:46:49.600780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.600893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.600913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:13.542 [2024-10-17 16:46:49.600925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.066 ms 00:32:13.542 [2024-10-17 16:46:49.600937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.661244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.661296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:13.542 [2024-10-17 16:46:49.661312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.321 ms 00:32:13.542 [2024-10-17 16:46:49.661322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.661383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.661396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.542 [2024-10-17 16:46:49.661407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:13.542 [2024-10-17 16:46:49.661418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.661944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.661965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.542 [2024-10-17 16:46:49.661976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:32:13.542 [2024-10-17 16:46:49.661986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.662114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.662130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.542 [2024-10-17 16:46:49.662141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:32:13.542 [2024-10-17 16:46:49.662151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.681995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.682036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.542 [2024-10-17 16:46:49.682051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.850 ms 00:32:13.542 [2024-10-17 16:46:49.682068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.702057] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:13.542 [2024-10-17 16:46:49.702284] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:13.542 [2024-10-17 16:46:49.702313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.702324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:13.542 [2024-10-17 16:46:49.702337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.135 ms 00:32:13.542 [2024-10-17 16:46:49.702347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.733707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.733775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:13.542 [2024-10-17 16:46:49.733792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.299 ms 00:32:13.542 [2024-10-17 16:46:49.733821] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.752307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.752372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:13.542 [2024-10-17 16:46:49.752393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.444 ms 00:32:13.542 [2024-10-17 16:46:49.752403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.770440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.770485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:13.542 [2024-10-17 16:46:49.770499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.017 ms 00:32:13.542 [2024-10-17 16:46:49.770509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.542 [2024-10-17 16:46:49.771413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.542 [2024-10-17 16:46:49.771446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:13.542 [2024-10-17 16:46:49.771459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:32:13.542 [2024-10-17 16:46:49.771469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.858051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.858126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:13.802 [2024-10-17 16:46:49.858142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.698 ms 00:32:13.802 [2024-10-17 16:46:49.858154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.869479] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:13.802 [2024-10-17 16:46:49.872812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.872843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:13.802 [2024-10-17 16:46:49.872857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.580 ms 00:32:13.802 [2024-10-17 16:46:49.872868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.872984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.872998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:13.802 [2024-10-17 16:46:49.873010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:13.802 [2024-10-17 16:46:49.873020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.873110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.873131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:13.802 [2024-10-17 16:46:49.873143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:13.802 [2024-10-17 16:46:49.873153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.873179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.873191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:32:13.802 [2024-10-17 16:46:49.873201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:13.802 [2024-10-17 16:46:49.873211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.873250] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:13.802 [2024-10-17 16:46:49.873263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.873273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:13.802 [2024-10-17 16:46:49.873290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:13.802 [2024-10-17 16:46:49.873300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.910713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.910755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:13.802 [2024-10-17 16:46:49.910770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.450 ms 00:32:13.802 [2024-10-17 16:46:49.910781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.910866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.802 [2024-10-17 16:46:49.910884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:13.802 [2024-10-17 16:46:49.910895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:13.802 [2024-10-17 16:46:49.910905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.802 [2024-10-17 16:46:49.912096] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 391.312 ms, result 0 00:32:14.738  [2024-10-17T16:46:51.972Z] Copying: 29/1024 [MB] (29 MBps) [2024-10-17T16:46:53.348Z] Copying: 59/1024 [MB] (30 MBps) [2024-10-17T16:46:54.285Z] Copying: 89/1024 [MB] (29 MBps) [2024-10-17T16:46:55.222Z] Copying: 116/1024 [MB] (27 MBps) [2024-10-17T16:46:56.159Z] Copying: 144/1024 [MB] (27 MBps) [2024-10-17T16:46:57.096Z] Copying: 172/1024 [MB] (27 MBps) [2024-10-17T16:46:58.073Z] Copying: 199/1024 [MB] (27 MBps) [2024-10-17T16:46:59.008Z] Copying: 226/1024 [MB] (26 MBps) [2024-10-17T16:46:59.943Z] Copying: 255/1024 [MB] (28 MBps) [2024-10-17T16:47:01.320Z] Copying: 283/1024 [MB] (28 MBps) [2024-10-17T16:47:02.257Z] Copying: 313/1024 [MB] (29 MBps) [2024-10-17T16:47:03.270Z] Copying: 343/1024 [MB] (30 MBps) [2024-10-17T16:47:04.206Z] Copying: 373/1024 [MB] (29 MBps) [2024-10-17T16:47:05.142Z] Copying: 402/1024 [MB] (28 MBps) [2024-10-17T16:47:06.079Z] Copying: 430/1024 [MB] (28 MBps) [2024-10-17T16:47:07.016Z] Copying: 457/1024 [MB] (27 MBps) [2024-10-17T16:47:07.953Z] Copying: 484/1024 [MB] (26 MBps) [2024-10-17T16:47:09.329Z] Copying: 511/1024 [MB] (27 MBps) [2024-10-17T16:47:09.897Z] Copying: 538/1024 [MB] (27 MBps) [2024-10-17T16:47:11.274Z] Copying: 566/1024 [MB] (27 MBps) [2024-10-17T16:47:12.210Z] Copying: 594/1024 [MB] (27 MBps) [2024-10-17T16:47:13.168Z] Copying: 622/1024 [MB] (27 MBps) [2024-10-17T16:47:14.104Z] Copying: 651/1024 [MB] (28 MBps) [2024-10-17T16:47:15.050Z] Copying: 679/1024 [MB] (28 MBps) [2024-10-17T16:47:15.986Z] Copying: 708/1024 [MB] (29 MBps) [2024-10-17T16:47:16.923Z] Copying: 738/1024 [MB] (29 MBps) [2024-10-17T16:47:17.944Z] Copying: 766/1024 [MB] (28 
MBps) [2024-10-17T16:47:19.321Z] Copying: 794/1024 [MB] (27 MBps) [2024-10-17T16:47:19.889Z] Copying: 822/1024 [MB] (27 MBps) [2024-10-17T16:47:21.265Z] Copying: 849/1024 [MB] (27 MBps) [2024-10-17T16:47:22.203Z] Copying: 878/1024 [MB] (28 MBps) [2024-10-17T16:47:23.140Z] Copying: 906/1024 [MB] (28 MBps) [2024-10-17T16:47:24.077Z] Copying: 933/1024 [MB] (27 MBps) [2024-10-17T16:47:25.014Z] Copying: 960/1024 [MB] (27 MBps) [2024-10-17T16:47:25.951Z] Copying: 988/1024 [MB] (27 MBps) [2024-10-17T16:47:26.210Z] Copying: 1015/1024 [MB] (26 MBps) [2024-10-17T16:47:26.210Z] Copying: 1024/1024 [MB] (average 28 MBps)[2024-10-17 16:47:26.197821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.911 [2024-10-17 16:47:26.197990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:49.911 [2024-10-17 16:47:26.198084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:49.911 [2024-10-17 16:47:26.198123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.911 [2024-10-17 16:47:26.198224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:49.911 [2024-10-17 16:47:26.202492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.911 [2024-10-17 16:47:26.202628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:49.911 [2024-10-17 16:47:26.202772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.212 ms 00:32:49.911 [2024-10-17 16:47:26.202811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:49.911 [2024-10-17 16:47:26.204807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:49.911 [2024-10-17 16:47:26.204959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:49.911 [2024-10-17 16:47:26.205044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.947 ms 00:32:49.911 [2024-10-17 16:47:26.205082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.222888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.223047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:50.221 [2024-10-17 16:47:26.223172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.791 ms 00:32:50.221 [2024-10-17 16:47:26.223210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.228300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.228439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:50.221 [2024-10-17 16:47:26.228559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.040 ms 00:32:50.221 [2024-10-17 16:47:26.228597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.266242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.266294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:50.221 [2024-10-17 16:47:26.266309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.630 ms 00:32:50.221 [2024-10-17 16:47:26.266320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.287579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:50.221 [2024-10-17 16:47:26.287626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:50.221 [2024-10-17 16:47:26.287641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.251 ms 00:32:50.221 [2024-10-17 16:47:26.287652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.287796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.287812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:50.221 [2024-10-17 16:47:26.287823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:32:50.221 [2024-10-17 16:47:26.287845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.324240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.324278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:50.221 [2024-10-17 16:47:26.324292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.437 ms 00:32:50.221 [2024-10-17 16:47:26.324318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.361220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.361276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:50.221 [2024-10-17 16:47:26.361315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.919 ms 00:32:50.221 [2024-10-17 16:47:26.361326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.397841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.398073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:50.221 [2024-10-17 16:47:26.398097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.528 ms 00:32:50.221 [2024-10-17 16:47:26.398108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.434637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:50.221 [2024-10-17 16:47:26.434696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:50.221 [2024-10-17 16:47:26.434739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.415 ms 00:32:50.221 [2024-10-17 16:47:26.434749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:50.221 [2024-10-17 16:47:26.434796] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:50.221 [2024-10-17 16:47:26.434816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:50.221 [2024-10-17 16:47:26.434829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:50.221 [2024-10-17 16:47:26.434856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:50.221 [2024-10-17 16:47:26.434868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 
/ 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.434997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:50.222 [2024-10-17 16:47:26.435161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31-100: 0 / 261120 wr_cnt: 0 state: free (identical for every remaining band)
00:32:50.223 [2024-10-17 16:47:26.435919] ftl_debug.c: 211-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
  device UUID: dadd3054-484d-499b-9d35-ce881e09f580
  total valid LBAs: 0
  total writes: 960
  user writes: 0
  WAF: inf
  limits: crit: 0, high: 0, low: 0, start: 0
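The "WAF: inf" line follows directly from the two counters above it: write amplification is media writes divided by host writes, and this point in the test had issued no host writes (user writes: 0) while FTL itself performed 960 internal metadata writes. A minimal sketch of that arithmetic, with a hypothetical helper name rather than SPDK's internal API:

    /* waf_sketch.c -- illustrative only; waf() is a hypothetical helper. */
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    static double waf(uint64_t total_writes, uint64_t user_writes)
    {
            /* 960 total writes, 0 user writes -> unbounded ratio */
            if (user_writes == 0)
                    return INFINITY;
            return (double)total_writes / (double)user_writes;
    }

    int main(void)
    {
            printf("WAF: %g\n", waf(960, 0)); /* prints "WAF: inf" */
            return 0;
    }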
[2024-10-17 16:47:26.436057] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] shutdown trace steps (name: duration, status):
  Action 'Dump statistics': 1.264 ms, status 0
  Action 'Deinitialize L2P': 20.007 ms, status 0
  Action 'Deinitialize P2L checkpointing': 0.550 ms, status 0
  Rollback 'Initialize reloc': 0.000 ms, status 0
  Rollback 'Initialize bands metadata': 0.000 ms, status 0
  Rollback 'Initialize trim map': 0.000 ms, status 0
  Rollback 'Initialize valid map': 0.000 ms, status 0
  Rollback 'Initialize NV cache': 0.000 ms, status 0
  Rollback 'Initialize metadata': 0.000 ms, status 0
  Rollback 'Initialize core IO channel': 0.000 ms, status 0
  Rollback 'Initialize bands': 0.000 ms, status 0
  Rollback 'Initialize memory pools': 0.000 ms, status 0
  Rollback 'Initialize superblock': 0.000 ms, status 0
  Rollback 'Open cache bdev': 0.000 ms, status 0
  Rollback 'Open base bdev': 0.000 ms, status 0
[2024-10-17 16:47:26.736806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 539.827 ms, result 0
16:47:27 ftl.ftl_restore -- ftl/restore.sh@74 -- /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144
[2024-10-17 16:47:27.946641] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization...
[2024-10-17 16:47:27.946792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76858 ]
[2024-10-17 16:47:28.121580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-10-17 16:47:28.240401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-10-17 16:47:28.602574] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
[2024-10-17 16:47:28.763869] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] startup trace steps (name: duration, status):
  Action 'Check configuration': 0.007 ms, status 0
  Action 'Open base bdev': 0.033 ms, status 0
[2024-10-17 16:47:28.764078] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-10-17 16:47:28.765060] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
  Action 'Open cache bdev': 1.012 ms, status 0
[2024-10-17 16:47:28.766586] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
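Every management step in this log is the same four-field record emitted by mngt/ftl_mngt.c:427-431: an Action or Rollback marker, the step name, its duration, and a status code. The per-step lists in this log carry all four fields on one line each. A minimal sketch of such a record, using a hypothetical struct rather than SPDK's internal layout:

    /* trace_step_sketch.c -- the quadruplet behind each step line. */
    #include <stdio.h>

    struct trace_step {
            const char *kind;   /* "Action" or "Rollback" */
            const char *name;   /* e.g. "Dump statistics" */
            double duration_ms; /* e.g. 1.264 */
            int status;         /* 0 on success */
    };

    static void dump_step(const struct trace_step *s)
    {
            printf("[FTL][ftl0] %s '%s': %.3f ms, status %d\n",
                   s->kind, s->name, s->duration_ms, s->status);
    }

    int main(void)
    {
            const struct trace_step s = { "Action", "Dump statistics", 1.264, 0 };
            dump_step(&s);
            return 0;
    }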
[2024-10-17 16:47:28.785839] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] startup trace steps, continued:
  Action 'Load super block': 19.286 ms, status 0
  Action 'Validate super block': 0.025 ms, status 0
  Action 'Initialize memory pools': 6.791 ms, status 0
  Action 'Initialize bands': 0.069 ms, status 0
  Action 'Register IO device': 0.009 ms, status 0
[2024-10-17 16:47:28.793298] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
  Action 'Initialize core IO channel': 4.956 ms, status 0
  Action 'Decorate bands': 0.010 ms, status 0
[2024-10-17 16:47:28.798418] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-10-17 16:47:28.798443] upgrade/ftl_sb_v5.c: 278/287/294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
[2024-10-17 16:47:28.798589] upgrade/ftl_sb_v5.c: 92/101/109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store: nvc 0x150 bytes, base 0x48 bytes, layout 0x190 bytes
[2024-10-17 16:47:28.798629] ftl_layout.c: 685-692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB, NV cache device capacity: 5171.00 MiB, L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
  Action 'Initialize layout': 0.280 ms, status 0
  Action 'Verify layout': 0.055 ms, status 0
[2024-10-17 16:47:28.798947] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks):
  sb:               0.00 MiB /  0.12 MiB
  l2p:              0.12 MiB / 80.00 MiB
  band_md:         80.12 MiB /  0.50 MiB
  band_md_mirror:  80.62 MiB /  0.50 MiB
  nvc_md:         113.88 MiB /  0.12 MiB
  nvc_md_mirror:  114.00 MiB /  0.12 MiB
  p2l0:            81.12 MiB /  8.00 MiB
  p2l1:            89.12 MiB /  8.00 MiB
  p2l2:            97.12 MiB /  8.00 MiB
  p2l3:           105.12 MiB /  8.00 MiB
  trim_md:        113.12 MiB /  0.25 MiB
  trim_md_mirror: 113.38 MiB /  0.25 MiB
  trim_log:       113.62 MiB /  0.12 MiB
  trim_log_mirror: 113.75 MiB / 0.12 MiB
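The layout numbers reported during 'Initialize layout' are internally consistent, which is worth checking when a config change misbehaves: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region above, and the same entry count at the FTL block size implies 81920 MiB of user-addressable space on the 103424 MiB base device, i.e. roughly 20% over-provisioning. A sketch of that check; the 4096-byte block size is an assumption, the other constants come straight from the log:

    /* layout_check.c -- consistency check of the logged layout numbers. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            const uint64_t l2p_entries = 20971520; /* "L2P entries" */
            const uint64_t addr_size   = 4;        /* "L2P address size" */
            const uint64_t block_size  = 4096;     /* assumed FTL block size */

            /* 20971520 * 4 B = 80.00 MiB -- matches the l2p region size */
            printf("L2P table: %.2f MiB\n",
                   (double)(l2p_entries * addr_size) / (1024 * 1024));

            /* 20971520 * 4 KiB = 81920 MiB user space vs 103424 MiB raw */
            printf("user space: %.2f MiB\n",
                   (double)(l2p_entries * block_size) / (1024 * 1024));
            return 0;
    }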
[2024-10-17 16:47:28.799376] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks):
  sb_mirror:      0.00 MiB /      0.12 MiB
  vmap:      102400.25 MiB /      3.38 MiB
  data_btm:       0.25 MiB / 102400.00 MiB
[2024-10-17 16:47:28.799472] upgrade/ftl_sb_v5.c: 408-416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
  Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-10-17 16:47:28.799641] upgrade/ftl_sb_v5.c: 422-430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-10-17 16:47:28.799721] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] startup trace steps, continued:
  Action 'Layout upgrade': 0.829 ms, status 0
  Action 'Initialize metadata': 41.674 ms, status 0
  Action 'Initialize band addresses': 0.055 ms, status 0
  Action 'Initialize NV cache': 60.984 ms, status 0
  Action 'Initialize valid map': 0.003 ms, status 0
  Action 'Initialize trim map': 0.426 ms, status 0
  Action 'Initialize bands metadata': 0.102 ms, status 0
  Action 'Initialize reloc': 17.785 ms, status 0
[2024-10-17 16:47:28.940100] ftl_nv_cache.c:1772/1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2; state loaded successfully
  Action 'Restore NV cache metadata': 18.667 ms, status 0
  Action 'Restore valid map metadata': 30.246 ms, status 0
  Action 'Restore band info metadata': 18.972 ms, status 0
  Action 'Restore trim metadata': 19.134 ms, status 0
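The SB metadata table and the MiB-based layout dump describe the same regions in different units: each blk_sz is a count of FTL blocks, so at the assumed 4096-byte block size type 0x2's 0x5000 blocks is the 80.00 MiB l2p region, 0x800 is the 8.00 MiB p2l regions, and 0x80 is the 0.50 MiB band_md regions. A sketch of the conversion; the block size is again an assumption, since the log never prints it:

    /* blk_to_mib.c -- hex block counts from the SB table, in MiB. */
    #include <stdint.h>
    #include <stdio.h>

    static double blocks_to_mib(uint64_t blocks)
    {
            return (double)(blocks * 4096) / (1024 * 1024); /* assumed 4 KiB blocks */
    }

    int main(void)
    {
            printf("0x5000 -> %.2f MiB\n", blocks_to_mib(0x5000)); /* 80.00, l2p */
            printf("0x800  -> %.2f MiB\n", blocks_to_mib(0x800));  /*  8.00, p2l0..p2l3 */
            printf("0x80   -> %.2f MiB\n", blocks_to_mib(0x80));   /*  0.50, band_md */
            return 0;
    }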
  Action 'Initialize P2L checkpointing': 0.652 ms, status 0
  Action 'Restore P2L checkpoints': 86.840 ms, status 0
[2024-10-17 16:47:29.107668] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
  Action 'Initialize L2P': 14.146 ms, status 0
  Action 'Restore L2P': 0.007 ms, status 0
  Action 'Finalize band initialization': 0.035 ms, status 0
  Action 'Start core poller': 0.005 ms, status 0
[2024-10-17 16:47:29.111173] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
  Action 'Self test on startup': 0.013 ms, status 0
  Action 'Set FTL dirty state': 37.206 ms, status 0
  Action 'Finalize initialization': 0.040 ms, status 0
[2024-10-17 16:47:29.149775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.068 ms, result 0
[2024-10-17T16:47:31.540Z] Copying: 30/1024 [MB] (30 MBps), one progress line roughly every second at 27-31 MBps, through [2024-10-17T16:48:04.707Z] Copying: 1024/1024 [MB] (average 29 MBps)
[2024-10-17 16:48:04.675654] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] shutdown trace steps (name: duration, status):
  Action 'Deinit core IO channel': 0.009 ms, status 0
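The copy size and rate are mutually consistent: spdk_dd was started with --count=262144, which at a presumed 4096-byte logical block is exactly the 1024 MB the progress lines count up to, and at the reported 29 MBps average the transfer takes about 35 s, matching the gap between the 'FTL startup' finish at 16:47:29 and the final progress line at 16:48:04. A sketch of that arithmetic:

    /* dd_math.c -- tie --count=262144 to the 1024 MB / 29 MBps lines. */
    #include <stdio.h>

    int main(void)
    {
            const double blocks = 262144, block_size = 4096;   /* block size assumed */
            const double mb = blocks * block_size / (1024 * 1024); /* 1024 MB */
            printf("payload: %.0f MB, ~%.0f s at 29 MBps\n", mb, mb / 29.0);
            return 0;
    }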
[2024-10-17 16:48:04.675817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
  Action 'Unregister IO device': 4.861 ms, status 0
  Action 'Stop core poller': 0.219 ms, status 0
  Action 'Persist L2P': 2.945 ms, status 0
  Action 'Finish L2P trims': 5.630 ms, status 0
  Action 'Persist NV cache metadata': 39.455 ms, status 0
  Action 'Persist valid map metadata': 21.307 ms, status 0
  Action 'Persist P2L metadata': 0.111 ms, status 0
  Action 'Persist band info metadata': 37.106 ms, status 0
  Action 'Persist trim metadata': 36.349 ms, status 0
  Action 'Persist superblock': 35.644 ms, status 0
  Action 'Set FTL clean state': 36.238 ms, status 0
[2024-10-17 16:48:04.896585] ftl_debug.c: 165-167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1-100: 0 / 261120 wr_cnt: 0 state: free (identical for every band)
[2024-10-17 16:48:04.897744] ftl_debug.c: 211-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
  device UUID: dadd3054-484d-499b-9d35-ce881e09f580
  total valid LBAs: 0
  total writes: 960
  user writes: 0
  WAF: inf
  limits: crit: 0, high: 0, low: 0, start: 0
  Action 'Dump statistics': 1.292 ms, status 0
[FTL][ftl0] Action 00:33:28.670 [2024-10-17 16:48:04.917973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:28.670 [2024-10-17 16:48:04.917988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.007 ms 00:33:28.670 [2024-10-17 16:48:04.917999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.670 [2024-10-17 16:48:04.918561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:28.670 [2024-10-17 16:48:04.918583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:28.670 [2024-10-17 16:48:04.918594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:33:28.670 [2024-10-17 16:48:04.918605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.930 [2024-10-17 16:48:04.970032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.930 [2024-10-17 16:48:04.970076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:28.930 [2024-10-17 16:48:04.970091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.930 [2024-10-17 16:48:04.970103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.930 [2024-10-17 16:48:04.970182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.930 [2024-10-17 16:48:04.970197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:28.930 [2024-10-17 16:48:04.970208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.930 [2024-10-17 16:48:04.970220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.930 [2024-10-17 16:48:04.970297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.930 [2024-10-17 16:48:04.970310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:28.930 [2024-10-17 16:48:04.970321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.930 [2024-10-17 16:48:04.970332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.930 [2024-10-17 16:48:04.970349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.930 [2024-10-17 16:48:04.970360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:28.930 [2024-10-17 16:48:04.970370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.930 [2024-10-17 16:48:04.970380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.930 [2024-10-17 16:48:05.097505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.930 [2024-10-17 16:48:05.097570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:28.931 [2024-10-17 16:48:05.097588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.097598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 [2024-10-17 16:48:05.198040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.931 [2024-10-17 16:48:05.198104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:28.931 [2024-10-17 16:48:05.198121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.198132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 
[2024-10-17 16:48:05.198228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.931 [2024-10-17 16:48:05.198247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:28.931 [2024-10-17 16:48:05.198258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.198268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 [2024-10-17 16:48:05.198317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.931 [2024-10-17 16:48:05.198329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:28.931 [2024-10-17 16:48:05.198339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.198349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 [2024-10-17 16:48:05.198456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.931 [2024-10-17 16:48:05.198474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:28.931 [2024-10-17 16:48:05.198484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.198494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 [2024-10-17 16:48:05.198529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.931 [2024-10-17 16:48:05.198542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:28.931 [2024-10-17 16:48:05.198553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.198563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 [2024-10-17 16:48:05.198601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.931 [2024-10-17 16:48:05.198612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:28.931 [2024-10-17 16:48:05.198626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.198636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 [2024-10-17 16:48:05.198718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:28.931 [2024-10-17 16:48:05.198734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:28.931 [2024-10-17 16:48:05.198745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:28.931 [2024-10-17 16:48:05.198756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:28.931 [2024-10-17 16:48:05.198926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 524.092 ms, result 0 00:33:30.310 00:33:30.310 00:33:30.310 16:48:06 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:31.687 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:31.687 16:48:07 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:31.946 [2024-10-17 16:48:08.051876] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:33:31.946 [2024-10-17 16:48:08.051995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77266 ] 00:33:31.946 [2024-10-17 16:48:08.223643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.205 [2024-10-17 16:48:08.343285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.465 [2024-10-17 16:48:08.699891] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:32.465 [2024-10-17 16:48:08.699978] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:32.725 [2024-10-17 16:48:08.861102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.861166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:32.725 [2024-10-17 16:48:08.861181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:32.725 [2024-10-17 16:48:08.861199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.861252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.861266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:32.725 [2024-10-17 16:48:08.861277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:32.725 [2024-10-17 16:48:08.861290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.861312] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:32.725 [2024-10-17 16:48:08.862338] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:32.725 [2024-10-17 16:48:08.862373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.862388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:32.725 [2024-10-17 16:48:08.862400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:33:32.725 [2024-10-17 16:48:08.862410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.863909] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:32.725 [2024-10-17 16:48:08.882700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.882742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:32.725 [2024-10-17 16:48:08.882757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.822 ms 00:33:32.725 [2024-10-17 16:48:08.882767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.882846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.882862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:32.725 [2024-10-17 16:48:08.882873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:32.725 [2024-10-17 16:48:08.882884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.889649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:32.725 [2024-10-17 16:48:08.889679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:32.725 [2024-10-17 16:48:08.889691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.704 ms 00:33:32.725 [2024-10-17 16:48:08.889710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.889791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.889806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:32.725 [2024-10-17 16:48:08.889817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:32.725 [2024-10-17 16:48:08.889827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.889868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.889880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:32.725 [2024-10-17 16:48:08.889891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:32.725 [2024-10-17 16:48:08.889901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.889926] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:32.725 [2024-10-17 16:48:08.894688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.894728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:32.725 [2024-10-17 16:48:08.894741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:33:32.725 [2024-10-17 16:48:08.894751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.894785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.894796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:32.725 [2024-10-17 16:48:08.894807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:32.725 [2024-10-17 16:48:08.894817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.894870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:32.725 [2024-10-17 16:48:08.894894] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:32.725 [2024-10-17 16:48:08.894947] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:32.725 [2024-10-17 16:48:08.894974] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:32.725 [2024-10-17 16:48:08.895068] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:32.725 [2024-10-17 16:48:08.895082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:32.725 [2024-10-17 16:48:08.895095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:32.725 [2024-10-17 16:48:08.895109] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:32.725 [2024-10-17 16:48:08.895121] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:32.725 [2024-10-17 16:48:08.895133] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:32.725 [2024-10-17 16:48:08.895143] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:32.725 [2024-10-17 16:48:08.895153] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:32.725 [2024-10-17 16:48:08.895163] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:32.725 [2024-10-17 16:48:08.895175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.895188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:32.725 [2024-10-17 16:48:08.895199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:33:32.725 [2024-10-17 16:48:08.895209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.895281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.725 [2024-10-17 16:48:08.895292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:32.725 [2024-10-17 16:48:08.895303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:32.725 [2024-10-17 16:48:08.895313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.725 [2024-10-17 16:48:08.895404] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:32.725 [2024-10-17 16:48:08.895419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:32.725 [2024-10-17 16:48:08.895434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:32.725 [2024-10-17 16:48:08.895444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.725 [2024-10-17 16:48:08.895455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:32.725 [2024-10-17 16:48:08.895464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:32.725 [2024-10-17 16:48:08.895474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:32.725 [2024-10-17 16:48:08.895484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:32.725 [2024-10-17 16:48:08.895493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:32.725 [2024-10-17 16:48:08.895502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:32.725 [2024-10-17 16:48:08.895512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:32.726 [2024-10-17 16:48:08.895521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:32.726 [2024-10-17 16:48:08.895531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:32.726 [2024-10-17 16:48:08.895540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:32.726 [2024-10-17 16:48:08.895550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:32.726 [2024-10-17 16:48:08.895568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:32.726 [2024-10-17 16:48:08.895586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:32.726 [2024-10-17 16:48:08.895595] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:32.726 [2024-10-17 16:48:08.895614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.726 [2024-10-17 16:48:08.895633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:32.726 [2024-10-17 16:48:08.895642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.726 [2024-10-17 16:48:08.895662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:32.726 [2024-10-17 16:48:08.895671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.726 [2024-10-17 16:48:08.895689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:32.726 [2024-10-17 16:48:08.895709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:32.726 [2024-10-17 16:48:08.895728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:32.726 [2024-10-17 16:48:08.895737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:32.726 [2024-10-17 16:48:08.895755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:32.726 [2024-10-17 16:48:08.895765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:32.726 [2024-10-17 16:48:08.895774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:32.726 [2024-10-17 16:48:08.895783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:32.726 [2024-10-17 16:48:08.895792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:32.726 [2024-10-17 16:48:08.895801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:32.726 [2024-10-17 16:48:08.895820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:32.726 [2024-10-17 16:48:08.895829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895838] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:32.726 [2024-10-17 16:48:08.895848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:32.726 [2024-10-17 16:48:08.895858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:32.726 [2024-10-17 16:48:08.895867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:32.726 [2024-10-17 16:48:08.895877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:32.726 [2024-10-17 16:48:08.895887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:32.726 [2024-10-17 16:48:08.895896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:32.726 
[2024-10-17 16:48:08.895907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:32.726 [2024-10-17 16:48:08.895916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:32.726 [2024-10-17 16:48:08.895926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:32.726 [2024-10-17 16:48:08.895936] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:32.726 [2024-10-17 16:48:08.895948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:32.726 [2024-10-17 16:48:08.895959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:32.726 [2024-10-17 16:48:08.895971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:32.726 [2024-10-17 16:48:08.895981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:32.726 [2024-10-17 16:48:08.895991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:32.726 [2024-10-17 16:48:08.896002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:32.726 [2024-10-17 16:48:08.896012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:32.726 [2024-10-17 16:48:08.896022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:32.726 [2024-10-17 16:48:08.896033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:32.726 [2024-10-17 16:48:08.896043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:32.726 [2024-10-17 16:48:08.896054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:32.726 [2024-10-17 16:48:08.896064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:32.726 [2024-10-17 16:48:08.896074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:32.726 [2024-10-17 16:48:08.896085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:32.726 [2024-10-17 16:48:08.896095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:32.726 [2024-10-17 16:48:08.896106] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:32.726 [2024-10-17 16:48:08.896117] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:32.726 [2024-10-17 16:48:08.896131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:32.726 [2024-10-17 16:48:08.896141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:32.726 [2024-10-17 16:48:08.896151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:32.726 [2024-10-17 16:48:08.896164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:32.726 [2024-10-17 16:48:08.896176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:08.896187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:32.726 [2024-10-17 16:48:08.896197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:33:32.726 [2024-10-17 16:48:08.896207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.726 [2024-10-17 16:48:08.935838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:08.935881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:32.726 [2024-10-17 16:48:08.935912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.651 ms 00:33:32.726 [2024-10-17 16:48:08.935923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.726 [2024-10-17 16:48:08.936008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:08.936024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:32.726 [2024-10-17 16:48:08.936036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:33:32.726 [2024-10-17 16:48:08.936046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.726 [2024-10-17 16:48:08.997729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:08.997780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:32.726 [2024-10-17 16:48:08.997795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.718 ms 00:33:32.726 [2024-10-17 16:48:08.997806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.726 [2024-10-17 16:48:08.997887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:08.997901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:32.726 [2024-10-17 16:48:08.997913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:32.726 [2024-10-17 16:48:08.997923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.726 [2024-10-17 16:48:08.998425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:08.998448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:32.726 [2024-10-17 16:48:08.998460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:33:32.726 [2024-10-17 16:48:08.998470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.726 [2024-10-17 16:48:08.998591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:08.998604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:32.726 [2024-10-17 16:48:08.998615] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:33:32.726 [2024-10-17 16:48:08.998624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.726 [2024-10-17 16:48:09.017860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.726 [2024-10-17 16:48:09.017912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:32.726 [2024-10-17 16:48:09.017928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.240 ms 00:33:32.726 [2024-10-17 16:48:09.017944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.985 [2024-10-17 16:48:09.037952] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:32.985 [2024-10-17 16:48:09.038019] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:32.985 [2024-10-17 16:48:09.038037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.985 [2024-10-17 16:48:09.038049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:32.985 [2024-10-17 16:48:09.038063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.980 ms 00:33:32.985 [2024-10-17 16:48:09.038073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.985 [2024-10-17 16:48:09.069176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.985 [2024-10-17 16:48:09.069241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:32.985 [2024-10-17 16:48:09.069263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.085 ms 00:33:32.985 [2024-10-17 16:48:09.069275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.985 [2024-10-17 16:48:09.087283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.985 [2024-10-17 16:48:09.087325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:32.985 [2024-10-17 16:48:09.087338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.975 ms 00:33:32.985 [2024-10-17 16:48:09.087349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.985 [2024-10-17 16:48:09.105602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.985 [2024-10-17 16:48:09.105654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:32.985 [2024-10-17 16:48:09.105677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.228 ms 00:33:32.985 [2024-10-17 16:48:09.105690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.985 [2024-10-17 16:48:09.106596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.985 [2024-10-17 16:48:09.106636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:32.985 [2024-10-17 16:48:09.106652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:33:32.985 [2024-10-17 16:48:09.106665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.985 [2024-10-17 16:48:09.193391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.985 [2024-10-17 16:48:09.193465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:32.985 [2024-10-17 16:48:09.193482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.808 ms 00:33:32.985 [2024-10-17 16:48:09.193499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.986 [2024-10-17 16:48:09.204582] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:32.986 [2024-10-17 16:48:09.207749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.986 [2024-10-17 16:48:09.207779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:32.986 [2024-10-17 16:48:09.207794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.201 ms 00:33:32.986 [2024-10-17 16:48:09.207804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.986 [2024-10-17 16:48:09.207925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.986 [2024-10-17 16:48:09.207939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:32.986 [2024-10-17 16:48:09.207951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:32.986 [2024-10-17 16:48:09.207961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.986 [2024-10-17 16:48:09.208054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.986 [2024-10-17 16:48:09.208067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:32.986 [2024-10-17 16:48:09.208078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:33:32.986 [2024-10-17 16:48:09.208088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.986 [2024-10-17 16:48:09.208113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.986 [2024-10-17 16:48:09.208124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:32.986 [2024-10-17 16:48:09.208133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:32.986 [2024-10-17 16:48:09.208143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.986 [2024-10-17 16:48:09.208174] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:32.986 [2024-10-17 16:48:09.208186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.986 [2024-10-17 16:48:09.208200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:32.986 [2024-10-17 16:48:09.208212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:32.986 [2024-10-17 16:48:09.208222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.986 [2024-10-17 16:48:09.243760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.986 [2024-10-17 16:48:09.243800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:32.986 [2024-10-17 16:48:09.243815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.572 ms 00:33:32.986 [2024-10-17 16:48:09.243825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:32.986 [2024-10-17 16:48:09.243926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:32.986 [2024-10-17 16:48:09.243939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:32.986 [2024-10-17 16:48:09.243951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:32.986 [2024-10-17 16:48:09.243961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:32.986 [2024-10-17 16:48:09.245114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.141 ms, result 0 00:33:34.362  [2024-10-17T16:48:11.598Z] Copying: 29/1024 [MB] (29 MBps) [2024-10-17T16:48:12.534Z] Copying: 58/1024 [MB] (28 MBps) [2024-10-17T16:48:13.470Z] Copying: 86/1024 [MB] (27 MBps) [2024-10-17T16:48:14.407Z] Copying: 113/1024 [MB] (27 MBps) [2024-10-17T16:48:15.345Z] Copying: 140/1024 [MB] (27 MBps) [2024-10-17T16:48:16.281Z] Copying: 168/1024 [MB] (27 MBps) [2024-10-17T16:48:17.656Z] Copying: 195/1024 [MB] (27 MBps) [2024-10-17T16:48:18.591Z] Copying: 222/1024 [MB] (27 MBps) [2024-10-17T16:48:19.532Z] Copying: 248/1024 [MB] (26 MBps) [2024-10-17T16:48:20.465Z] Copying: 275/1024 [MB] (26 MBps) [2024-10-17T16:48:21.401Z] Copying: 302/1024 [MB] (27 MBps) [2024-10-17T16:48:22.339Z] Copying: 329/1024 [MB] (26 MBps) [2024-10-17T16:48:23.276Z] Copying: 356/1024 [MB] (27 MBps) [2024-10-17T16:48:24.656Z] Copying: 383/1024 [MB] (27 MBps) [2024-10-17T16:48:25.599Z] Copying: 411/1024 [MB] (27 MBps) [2024-10-17T16:48:26.536Z] Copying: 438/1024 [MB] (27 MBps) [2024-10-17T16:48:27.473Z] Copying: 465/1024 [MB] (27 MBps) [2024-10-17T16:48:28.408Z] Copying: 493/1024 [MB] (27 MBps) [2024-10-17T16:48:29.351Z] Copying: 520/1024 [MB] (26 MBps) [2024-10-17T16:48:30.287Z] Copying: 549/1024 [MB] (28 MBps) [2024-10-17T16:48:31.663Z] Copying: 577/1024 [MB] (28 MBps) [2024-10-17T16:48:32.230Z] Copying: 605/1024 [MB] (27 MBps) [2024-10-17T16:48:33.605Z] Copying: 632/1024 [MB] (27 MBps) [2024-10-17T16:48:34.540Z] Copying: 660/1024 [MB] (27 MBps) [2024-10-17T16:48:35.475Z] Copying: 687/1024 [MB] (27 MBps) [2024-10-17T16:48:36.409Z] Copying: 714/1024 [MB] (26 MBps) [2024-10-17T16:48:37.342Z] Copying: 741/1024 [MB] (26 MBps) [2024-10-17T16:48:38.275Z] Copying: 767/1024 [MB] (26 MBps) [2024-10-17T16:48:39.263Z] Copying: 794/1024 [MB] (27 MBps) [2024-10-17T16:48:40.640Z] Copying: 822/1024 [MB] (27 MBps) [2024-10-17T16:48:41.623Z] Copying: 852/1024 [MB] (30 MBps) [2024-10-17T16:48:42.556Z] Copying: 880/1024 [MB] (28 MBps) [2024-10-17T16:48:43.522Z] Copying: 907/1024 [MB] (27 MBps) [2024-10-17T16:48:44.457Z] Copying: 934/1024 [MB] (27 MBps) [2024-10-17T16:48:45.391Z] Copying: 964/1024 [MB] (29 MBps) [2024-10-17T16:48:46.325Z] Copying: 992/1024 [MB] (28 MBps) [2024-10-17T16:48:47.261Z] Copying: 1020/1024 [MB] (27 MBps) [2024-10-17T16:48:47.261Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-17 16:48:47.094638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.962 [2024-10-17 16:48:47.094961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:10.962 [2024-10-17 16:48:47.095067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:10.962 [2024-10-17 16:48:47.095109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.962 [2024-10-17 16:48:47.096313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:10.962 [2024-10-17 16:48:47.104113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.962 [2024-10-17 16:48:47.104351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:10.962 [2024-10-17 16:48:47.104462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.652 ms 00:34:10.962 [2024-10-17 16:48:47.104481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.962 [2024-10-17 
16:48:47.114631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.962 [2024-10-17 16:48:47.114833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:10.962 [2024-10-17 16:48:47.114939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.318 ms 00:34:10.962 [2024-10-17 16:48:47.114982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.962 [2024-10-17 16:48:47.139273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.962 [2024-10-17 16:48:47.139572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:10.962 [2024-10-17 16:48:47.139674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.270 ms 00:34:10.962 [2024-10-17 16:48:47.139736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.962 [2024-10-17 16:48:47.144738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.962 [2024-10-17 16:48:47.144910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:10.962 [2024-10-17 16:48:47.144998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms 00:34:10.962 [2024-10-17 16:48:47.145039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.962 [2024-10-17 16:48:47.183152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.962 [2024-10-17 16:48:47.183323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:10.962 [2024-10-17 16:48:47.183433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.049 ms 00:34:10.962 [2024-10-17 16:48:47.183478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.962 [2024-10-17 16:48:47.206109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.962 [2024-10-17 16:48:47.206264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:10.962 [2024-10-17 16:48:47.206299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.597 ms 00:34:10.962 [2024-10-17 16:48:47.206312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.221 [2024-10-17 16:48:47.334613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.221 [2024-10-17 16:48:47.334723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:11.221 [2024-10-17 16:48:47.334744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 128.412 ms 00:34:11.221 [2024-10-17 16:48:47.334758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.221 [2024-10-17 16:48:47.373847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.221 [2024-10-17 16:48:47.373907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:11.221 [2024-10-17 16:48:47.373926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.127 ms 00:34:11.221 [2024-10-17 16:48:47.373940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.221 [2024-10-17 16:48:47.410672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.221 [2024-10-17 16:48:47.410752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:11.221 [2024-10-17 16:48:47.410769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.741 ms 00:34:11.221 [2024-10-17 16:48:47.410782] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.221 [2024-10-17 16:48:47.446846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.221 [2024-10-17 16:48:47.446895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:11.221 [2024-10-17 16:48:47.446910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.074 ms 00:34:11.221 [2024-10-17 16:48:47.446923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.221 [2024-10-17 16:48:47.483562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.221 [2024-10-17 16:48:47.483609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:11.221 [2024-10-17 16:48:47.483625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.605 ms 00:34:11.221 [2024-10-17 16:48:47.483638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.221 [2024-10-17 16:48:47.483682] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:11.221 [2024-10-17 16:48:47.483720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113920 / 261120 wr_cnt: 1 state: open 00:34:11.221 [2024-10-17 16:48:47.483738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:11.221 [2024-10-17 16:48:47.483858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483950] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.483988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 
16:48:47.484282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:34:11.222 [2024-10-17 16:48:47.484624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.484993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.485009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.485024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.485037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.485050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.485063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:11.222 [2024-10-17 16:48:47.485084] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:11.222 [2024-10-17 16:48:47.485096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dadd3054-484d-499b-9d35-ce881e09f580 00:34:11.222 [2024-10-17 16:48:47.485110] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113920 00:34:11.222 [2024-10-17 16:48:47.485122] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114880 00:34:11.222 [2024-10-17 16:48:47.485134] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113920 00:34:11.222 [2024-10-17 16:48:47.485148] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:34:11.222 [2024-10-17 16:48:47.485160] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:11.223 [2024-10-17 16:48:47.485173] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:11.223 [2024-10-17 16:48:47.485203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:11.223 [2024-10-17 16:48:47.485215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:11.223 [2024-10-17 16:48:47.485226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:11.223 [2024-10-17 16:48:47.485239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.223 [2024-10-17 16:48:47.485259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:11.223 [2024-10-17 16:48:47.485271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:34:11.223 [2024-10-17 16:48:47.485284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.223 [2024-10-17 16:48:47.506331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.223 [2024-10-17 16:48:47.506374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:11.223 [2024-10-17 16:48:47.506390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.018 ms 00:34:11.223 [2024-10-17 16:48:47.506403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.223 [2024-10-17 16:48:47.507089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:11.223 [2024-10-17 16:48:47.507115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:34:11.223 [2024-10-17 16:48:47.507130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:34:11.223 [2024-10-17 16:48:47.507142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.481 [2024-10-17 16:48:47.563465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.481 [2024-10-17 16:48:47.563512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:11.481 [2024-10-17 16:48:47.563535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.481 [2024-10-17 16:48:47.563554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.481 [2024-10-17 16:48:47.563623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.481 [2024-10-17 16:48:47.563638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:11.481 [2024-10-17 16:48:47.563651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.481 [2024-10-17 16:48:47.563665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.481 [2024-10-17 16:48:47.563794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.481 [2024-10-17 16:48:47.563812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:11.481 [2024-10-17 16:48:47.563826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.481 [2024-10-17 16:48:47.563848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.481 [2024-10-17 16:48:47.563879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.481 [2024-10-17 16:48:47.563893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:11.481 [2024-10-17 16:48:47.563905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.481 [2024-10-17 16:48:47.563918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.481 [2024-10-17 16:48:47.698959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.481 [2024-10-17 16:48:47.699038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:11.481 [2024-10-17 16:48:47.699057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.481 [2024-10-17 16:48:47.699080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.803606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.739 [2024-10-17 16:48:47.803683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:11.739 [2024-10-17 16:48:47.803717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.739 [2024-10-17 16:48:47.803731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.803870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.739 [2024-10-17 16:48:47.803887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:11.739 [2024-10-17 16:48:47.803901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.739 [2024-10-17 16:48:47.803914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.803973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.739 
[2024-10-17 16:48:47.803988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:11.739 [2024-10-17 16:48:47.804001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.739 [2024-10-17 16:48:47.804013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.804165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.739 [2024-10-17 16:48:47.804191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:11.739 [2024-10-17 16:48:47.804204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.739 [2024-10-17 16:48:47.804218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.804266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.739 [2024-10-17 16:48:47.804287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:11.739 [2024-10-17 16:48:47.804301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.739 [2024-10-17 16:48:47.804314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.804367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.739 [2024-10-17 16:48:47.804382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:11.739 [2024-10-17 16:48:47.804404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.739 [2024-10-17 16:48:47.804417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.804485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.739 [2024-10-17 16:48:47.804501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:11.739 [2024-10-17 16:48:47.804514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.739 [2024-10-17 16:48:47.804527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.739 [2024-10-17 16:48:47.804693] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 712.417 ms, result 0 00:34:13.112 00:34:13.112 00:34:13.112 16:48:49 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:13.370 [2024-10-17 16:48:49.447902] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
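(The read-back step traced above is a plain spdk_dd invocation. For readability, here is the same command reflowed with comments; every path and value is copied from this run, and --skip/--count are counted in input blocks at spdk_dd's configured I/O size:)

# copy 262144 blocks, starting 131072 blocks in, from bdev "ftl0" into a file
#   --ib    input bdev name        --of  output file path
#   --json  bdev configuration to load on startup
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --skip=131072 --count=262144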
00:34:13.370 [2024-10-17 16:48:49.448557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77691 ] 00:34:13.370 [2024-10-17 16:48:49.619198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.628 [2024-10-17 16:48:49.761528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.886 [2024-10-17 16:48:50.180115] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:13.886 [2024-10-17 16:48:50.180203] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:14.146 [2024-10-17 16:48:50.347522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.347595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:14.146 [2024-10-17 16:48:50.347615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:14.146 [2024-10-17 16:48:50.347637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.347713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.347729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:14.146 [2024-10-17 16:48:50.347744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:34:14.146 [2024-10-17 16:48:50.347760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.347790] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:14.146 [2024-10-17 16:48:50.348840] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:14.146 [2024-10-17 16:48:50.348886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.348905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:14.146 [2024-10-17 16:48:50.348921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:34:14.146 [2024-10-17 16:48:50.348934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.351432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:14.146 [2024-10-17 16:48:50.372010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.372059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:14.146 [2024-10-17 16:48:50.372078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.612 ms 00:34:14.146 [2024-10-17 16:48:50.372091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.372168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.372188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:14.146 [2024-10-17 16:48:50.372201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:14.146 [2024-10-17 16:48:50.372214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.384303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:14.146 [2024-10-17 16:48:50.384337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:14.146 [2024-10-17 16:48:50.384353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.023 ms 00:34:14.146 [2024-10-17 16:48:50.384365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.384487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.384504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:14.146 [2024-10-17 16:48:50.384519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:34:14.146 [2024-10-17 16:48:50.384531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.384598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.384612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:14.146 [2024-10-17 16:48:50.384625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:14.146 [2024-10-17 16:48:50.384637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.384667] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:14.146 [2024-10-17 16:48:50.390426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.390464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:14.146 [2024-10-17 16:48:50.390479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.775 ms 00:34:14.146 [2024-10-17 16:48:50.390491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.390531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.390543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:14.146 [2024-10-17 16:48:50.390556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:14.146 [2024-10-17 16:48:50.390568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.390610] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:14.146 [2024-10-17 16:48:50.390642] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:14.146 [2024-10-17 16:48:50.390681] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:14.146 [2024-10-17 16:48:50.390718] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:14.146 [2024-10-17 16:48:50.390812] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:14.146 [2024-10-17 16:48:50.390828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:14.146 [2024-10-17 16:48:50.390844] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:14.146 [2024-10-17 16:48:50.390858] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:14.146 [2024-10-17 16:48:50.390872] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:14.146 [2024-10-17 16:48:50.390885] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:14.146 [2024-10-17 16:48:50.390896] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:14.146 [2024-10-17 16:48:50.390909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:14.146 [2024-10-17 16:48:50.390921] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:14.146 [2024-10-17 16:48:50.390934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.390952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:14.146 [2024-10-17 16:48:50.390965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:34:14.146 [2024-10-17 16:48:50.390976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.391048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.146 [2024-10-17 16:48:50.391061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:14.146 [2024-10-17 16:48:50.391074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:34:14.146 [2024-10-17 16:48:50.391086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.146 [2024-10-17 16:48:50.391206] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:14.146 [2024-10-17 16:48:50.391225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:14.146 [2024-10-17 16:48:50.391244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:14.146 [2024-10-17 16:48:50.391257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:14.146 [2024-10-17 16:48:50.391282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:14.146 [2024-10-17 16:48:50.391305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:14.146 [2024-10-17 16:48:50.391317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:14.146 [2024-10-17 16:48:50.391341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:14.146 [2024-10-17 16:48:50.391352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:14.146 [2024-10-17 16:48:50.391377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:14.146 [2024-10-17 16:48:50.391389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:14.146 [2024-10-17 16:48:50.391401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:14.146 [2024-10-17 16:48:50.391426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:14.146 [2024-10-17 16:48:50.391449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:14.146 [2024-10-17 16:48:50.391460] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:14.146 [2024-10-17 16:48:50.391483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.146 [2024-10-17 16:48:50.391505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:14.146 [2024-10-17 16:48:50.391516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.146 [2024-10-17 16:48:50.391538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:14.146 [2024-10-17 16:48:50.391550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:14.146 [2024-10-17 16:48:50.391560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.146 [2024-10-17 16:48:50.391571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:14.147 [2024-10-17 16:48:50.391581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:14.147 [2024-10-17 16:48:50.391592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.147 [2024-10-17 16:48:50.391604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:14.147 [2024-10-17 16:48:50.391615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:14.147 [2024-10-17 16:48:50.391626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:14.147 [2024-10-17 16:48:50.391636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:14.147 [2024-10-17 16:48:50.391647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:14.147 [2024-10-17 16:48:50.391657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:14.147 [2024-10-17 16:48:50.391668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:14.147 [2024-10-17 16:48:50.391679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:14.147 [2024-10-17 16:48:50.391690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.147 [2024-10-17 16:48:50.391714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:14.147 [2024-10-17 16:48:50.391726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:14.147 [2024-10-17 16:48:50.391738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.147 [2024-10-17 16:48:50.391749] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:14.147 [2024-10-17 16:48:50.391762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:14.147 [2024-10-17 16:48:50.391775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:14.147 [2024-10-17 16:48:50.391788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.147 [2024-10-17 16:48:50.391800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:14.147 [2024-10-17 16:48:50.391812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:14.147 [2024-10-17 16:48:50.391823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:14.147 
[2024-10-17 16:48:50.391834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:14.147 [2024-10-17 16:48:50.391846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:14.147 [2024-10-17 16:48:50.391857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:14.147 [2024-10-17 16:48:50.391870] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:14.147 [2024-10-17 16:48:50.391884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:14.147 [2024-10-17 16:48:50.391897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:14.147 [2024-10-17 16:48:50.391909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:14.147 [2024-10-17 16:48:50.391921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:14.147 [2024-10-17 16:48:50.391933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:14.147 [2024-10-17 16:48:50.391945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:14.147 [2024-10-17 16:48:50.391957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:14.147 [2024-10-17 16:48:50.391969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:14.147 [2024-10-17 16:48:50.391980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:14.147 [2024-10-17 16:48:50.391992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:14.147 [2024-10-17 16:48:50.392004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:14.147 [2024-10-17 16:48:50.392016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:14.147 [2024-10-17 16:48:50.392027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:14.147 [2024-10-17 16:48:50.392039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:14.147 [2024-10-17 16:48:50.392052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:14.147 [2024-10-17 16:48:50.392063] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:14.147 [2024-10-17 16:48:50.392077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:14.147 [2024-10-17 16:48:50.392095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:14.147 [2024-10-17 16:48:50.392108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:14.147 [2024-10-17 16:48:50.392119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:14.147 [2024-10-17 16:48:50.392132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:14.147 [2024-10-17 16:48:50.392145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.147 [2024-10-17 16:48:50.392158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:14.147 [2024-10-17 16:48:50.392170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:34:14.147 [2024-10-17 16:48:50.392183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.147 [2024-10-17 16:48:50.439797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.147 [2024-10-17 16:48:50.439839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:14.147 [2024-10-17 16:48:50.439855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.636 ms 00:34:14.147 [2024-10-17 16:48:50.439868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.147 [2024-10-17 16:48:50.439949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.147 [2024-10-17 16:48:50.439970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:14.147 [2024-10-17 16:48:50.439983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:34:14.147 [2024-10-17 16:48:50.439995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.500377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.500440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:14.406 [2024-10-17 16:48:50.500457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.395 ms 00:34:14.406 [2024-10-17 16:48:50.500470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.500511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.500525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:14.406 [2024-10-17 16:48:50.500539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:14.406 [2024-10-17 16:48:50.500558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.501393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.501419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:14.406 [2024-10-17 16:48:50.501433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:34:14.406 [2024-10-17 16:48:50.501446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.501584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.501602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:14.406 [2024-10-17 16:48:50.501615] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:34:14.406 [2024-10-17 16:48:50.501627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.525491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.525531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:14.406 [2024-10-17 16:48:50.525547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.868 ms 00:34:14.406 [2024-10-17 16:48:50.525566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.545825] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:14.406 [2024-10-17 16:48:50.545872] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:14.406 [2024-10-17 16:48:50.545891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.545905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:14.406 [2024-10-17 16:48:50.545920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.214 ms 00:34:14.406 [2024-10-17 16:48:50.545932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.575550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.575595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:14.406 [2024-10-17 16:48:50.575619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.612 ms 00:34:14.406 [2024-10-17 16:48:50.575631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.593724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.593785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:14.406 [2024-10-17 16:48:50.593800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.074 ms 00:34:14.406 [2024-10-17 16:48:50.593812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.611195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.611236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:14.406 [2024-10-17 16:48:50.611252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.314 ms 00:34:14.406 [2024-10-17 16:48:50.611264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.406 [2024-10-17 16:48:50.611992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.406 [2024-10-17 16:48:50.612028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:14.406 [2024-10-17 16:48:50.612043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:34:14.406 [2024-10-17 16:48:50.612056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.664 [2024-10-17 16:48:50.707598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.664 [2024-10-17 16:48:50.707672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:14.664 [2024-10-17 16:48:50.707692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 95.662 ms 00:34:14.664 [2024-10-17 16:48:50.707737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.664 [2024-10-17 16:48:50.718647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:14.664 [2024-10-17 16:48:50.722065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.664 [2024-10-17 16:48:50.722103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:14.664 [2024-10-17 16:48:50.722119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.263 ms 00:34:14.664 [2024-10-17 16:48:50.722133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.664 [2024-10-17 16:48:50.722247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.664 [2024-10-17 16:48:50.722264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:14.664 [2024-10-17 16:48:50.722280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:14.664 [2024-10-17 16:48:50.722293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.665 [2024-10-17 16:48:50.724518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.665 [2024-10-17 16:48:50.724565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:14.665 [2024-10-17 16:48:50.724581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:34:14.665 [2024-10-17 16:48:50.724594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.665 [2024-10-17 16:48:50.724640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.665 [2024-10-17 16:48:50.724654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:14.665 [2024-10-17 16:48:50.724667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:14.665 [2024-10-17 16:48:50.724680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.665 [2024-10-17 16:48:50.724749] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:14.665 [2024-10-17 16:48:50.724767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.665 [2024-10-17 16:48:50.724786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:14.665 [2024-10-17 16:48:50.724800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:34:14.665 [2024-10-17 16:48:50.724812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.665 [2024-10-17 16:48:50.762896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.665 [2024-10-17 16:48:50.762943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:14.665 [2024-10-17 16:48:50.762961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.115 ms 00:34:14.665 [2024-10-17 16:48:50.762975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.665 [2024-10-17 16:48:50.763080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.665 [2024-10-17 16:48:50.763096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:14.665 [2024-10-17 16:48:50.763111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:14.665 [2024-10-17 16:48:50.763123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
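(Each management step above is logged by mngt/ftl_mngt.c as an Action/name/duration/status quad, and the summary entry just below reports the whole 'FTL startup' process at 417.218 ms. A minimal sketch for turning such a capture into a per-step duration table, assuming one entry per line as SPDK emits them; ftl.log is a hypothetical file name:)

awk '
  /428:trace_step/ { sub(/.*name: /, ""); name = $0 }               # remember the step name
  /430:trace_step/ { sub(/.*duration: /, ""); print $1 "\t" name }  # pair it with its duration in ms
' ftl.log | sort -rn | head                                         # slowest steps first

(The periodic stats dumps are similarly checkable by hand: WAF is total writes divided by user writes, i.e. 114880 / 113920 ≈ 1.0084 at the shutdown earlier in this log and 18112 / 17152 ≈ 1.0560 at the one further down.)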
00:34:14.665 [2024-10-17 16:48:50.764649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.218 ms, result 0 00:34:16.040  [2024-10-17T16:48:53.277Z] Copying: 22/1024 [MB] (22 MBps) [2024-10-17T16:48:54.211Z] Copying: 47/1024 [MB] (25 MBps) [2024-10-17T16:48:55.172Z] Copying: 73/1024 [MB] (25 MBps) [2024-10-17T16:48:56.105Z] Copying: 98/1024 [MB] (25 MBps) [2024-10-17T16:48:57.040Z] Copying: 123/1024 [MB] (24 MBps) [2024-10-17T16:48:58.413Z] Copying: 153/1024 [MB] (30 MBps) [2024-10-17T16:48:59.347Z] Copying: 183/1024 [MB] (29 MBps) [2024-10-17T16:49:00.282Z] Copying: 211/1024 [MB] (28 MBps) [2024-10-17T16:49:01.216Z] Copying: 239/1024 [MB] (27 MBps) [2024-10-17T16:49:02.151Z] Copying: 268/1024 [MB] (28 MBps) [2024-10-17T16:49:03.086Z] Copying: 295/1024 [MB] (27 MBps) [2024-10-17T16:49:04.022Z] Copying: 322/1024 [MB] (27 MBps) [2024-10-17T16:49:05.397Z] Copying: 350/1024 [MB] (27 MBps) [2024-10-17T16:49:06.332Z] Copying: 378/1024 [MB] (27 MBps) [2024-10-17T16:49:07.267Z] Copying: 405/1024 [MB] (27 MBps) [2024-10-17T16:49:08.203Z] Copying: 433/1024 [MB] (28 MBps) [2024-10-17T16:49:09.139Z] Copying: 462/1024 [MB] (29 MBps) [2024-10-17T16:49:10.073Z] Copying: 492/1024 [MB] (29 MBps) [2024-10-17T16:49:11.010Z] Copying: 520/1024 [MB] (28 MBps) [2024-10-17T16:49:12.406Z] Copying: 549/1024 [MB] (28 MBps) [2024-10-17T16:49:12.974Z] Copying: 577/1024 [MB] (28 MBps) [2024-10-17T16:49:14.352Z] Copying: 606/1024 [MB] (28 MBps) [2024-10-17T16:49:15.289Z] Copying: 634/1024 [MB] (27 MBps) [2024-10-17T16:49:16.226Z] Copying: 663/1024 [MB] (29 MBps) [2024-10-17T16:49:17.163Z] Copying: 692/1024 [MB] (28 MBps) [2024-10-17T16:49:18.102Z] Copying: 720/1024 [MB] (27 MBps) [2024-10-17T16:49:19.037Z] Copying: 748/1024 [MB] (27 MBps) [2024-10-17T16:49:19.972Z] Copying: 775/1024 [MB] (26 MBps) [2024-10-17T16:49:21.348Z] Copying: 800/1024 [MB] (25 MBps) [2024-10-17T16:49:22.283Z] Copying: 826/1024 [MB] (25 MBps) [2024-10-17T16:49:23.250Z] Copying: 851/1024 [MB] (25 MBps) [2024-10-17T16:49:24.186Z] Copying: 876/1024 [MB] (25 MBps) [2024-10-17T16:49:25.120Z] Copying: 902/1024 [MB] (25 MBps) [2024-10-17T16:49:26.175Z] Copying: 931/1024 [MB] (28 MBps) [2024-10-17T16:49:27.139Z] Copying: 959/1024 [MB] (28 MBps) [2024-10-17T16:49:28.076Z] Copying: 989/1024 [MB] (29 MBps) [2024-10-17T16:49:28.334Z] Copying: 1017/1024 [MB] (27 MBps) [2024-10-17T16:49:28.592Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-10-17 16:49:28.403773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.293 [2024-10-17 16:49:28.403841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:52.293 [2024-10-17 16:49:28.403862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:52.293 [2024-10-17 16:49:28.403876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.293 [2024-10-17 16:49:28.403906] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:52.293 [2024-10-17 16:49:28.409148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.293 [2024-10-17 16:49:28.409185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:52.293 [2024-10-17 16:49:28.409200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.217 ms 00:34:52.293 [2024-10-17 16:49:28.409213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.293 [2024-10-17 
16:49:28.409449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.293 [2024-10-17 16:49:28.409469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:52.293 [2024-10-17 16:49:28.409482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:34:52.293 [2024-10-17 16:49:28.409494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.293 [2024-10-17 16:49:28.414254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.293 [2024-10-17 16:49:28.414304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:52.293 [2024-10-17 16:49:28.414318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:34:52.293 [2024-10-17 16:49:28.414331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.293 [2024-10-17 16:49:28.419926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.293 [2024-10-17 16:49:28.419978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:52.293 [2024-10-17 16:49:28.419992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.563 ms 00:34:52.293 [2024-10-17 16:49:28.420002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.293 [2024-10-17 16:49:28.456785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.293 [2024-10-17 16:49:28.456840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:52.293 [2024-10-17 16:49:28.456856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.790 ms 00:34:52.293 [2024-10-17 16:49:28.456883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.293 [2024-10-17 16:49:28.476838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.293 [2024-10-17 16:49:28.476877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:52.293 [2024-10-17 16:49:28.476913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.944 ms 00:34:52.293 [2024-10-17 16:49:28.476923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.553 [2024-10-17 16:49:28.618825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.553 [2024-10-17 16:49:28.618900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:52.553 [2024-10-17 16:49:28.618917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 142.084 ms 00:34:52.553 [2024-10-17 16:49:28.618928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.553 [2024-10-17 16:49:28.655458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.553 [2024-10-17 16:49:28.655496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:34:52.553 [2024-10-17 16:49:28.655510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.570 ms 00:34:52.553 [2024-10-17 16:49:28.655520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.553 [2024-10-17 16:49:28.692788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.553 [2024-10-17 16:49:28.692846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:34:52.553 [2024-10-17 16:49:28.692879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.272 ms 00:34:52.553 [2024-10-17 16:49:28.692889] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.553 [2024-10-17 16:49:28.727979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.553 [2024-10-17 16:49:28.728015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:52.553 [2024-10-17 16:49:28.728044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.095 ms 00:34:52.553 [2024-10-17 16:49:28.728054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.553 [2024-10-17 16:49:28.763058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.553 [2024-10-17 16:49:28.763094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:52.553 [2024-10-17 16:49:28.763123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.979 ms 00:34:52.553 [2024-10-17 16:49:28.763133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.553 [2024-10-17 16:49:28.763171] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:52.553 [2024-10-17 16:49:28.763187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:52.553 [2024-10-17 16:49:28.763199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763369] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 
16:49:28.763647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:52.553 [2024-10-17 16:49:28.763678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:34:52.554 [2024-10-17 16:49:28.763925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.763998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 
wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:52.554 [2024-10-17 16:49:28.764277] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:52.554 [2024-10-17 16:49:28.764298] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dadd3054-484d-499b-9d35-ce881e09f580 00:34:52.554 [2024-10-17 16:49:28.764308] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:52.554 [2024-10-17 16:49:28.764318] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18112 00:34:52.554 [2024-10-17 16:49:28.764328] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17152 00:34:52.554 [2024-10-17 16:49:28.764338] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0560 00:34:52.554 [2024-10-17 16:49:28.764348] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:52.554 [2024-10-17 16:49:28.764358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:52.554 [2024-10-17 16:49:28.764369] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:52.554 [2024-10-17 16:49:28.764388] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:52.554 [2024-10-17 16:49:28.764397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:52.554 [2024-10-17 16:49:28.764407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.554 [2024-10-17 16:49:28.764429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:52.554 [2024-10-17 16:49:28.764440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:34:52.554 [2024-10-17 16:49:28.764450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.554 [2024-10-17 16:49:28.784114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.554 [2024-10-17 16:49:28.784147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:52.554 [2024-10-17 16:49:28.784160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.661 ms 00:34:52.554 [2024-10-17 16:49:28.784170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.554 [2024-10-17 16:49:28.784718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:52.554 [2024-10-17 16:49:28.784740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:34:52.554 [2024-10-17 16:49:28.784751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:34:52.554 [2024-10-17 16:49:28.784761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.554 [2024-10-17 16:49:28.835613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.554 [2024-10-17 16:49:28.835710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:52.554 [2024-10-17 16:49:28.835733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.554 [2024-10-17 16:49:28.835766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.554 [2024-10-17 16:49:28.835842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.554 [2024-10-17 16:49:28.835853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:52.554 [2024-10-17 16:49:28.835864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.554 [2024-10-17 16:49:28.835874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.554 [2024-10-17 16:49:28.835979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.554 [2024-10-17 16:49:28.835993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:52.554 [2024-10-17 16:49:28.836005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.554 [2024-10-17 16:49:28.836014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.554 [2024-10-17 16:49:28.836038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.554 [2024-10-17 16:49:28.836048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:52.554 [2024-10-17 16:49:28.836058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.554 [2024-10-17 16:49:28.836068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:28.957454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 [2024-10-17 16:49:28.957513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:52.814 [2024-10-17 16:49:28.957528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:28.957546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.058527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 [2024-10-17 16:49:29.058586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:52.814 [2024-10-17 16:49:29.058601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:29.058612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.058736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 [2024-10-17 16:49:29.058751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:52.814 [2024-10-17 16:49:29.058762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:29.058772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.058825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 
[2024-10-17 16:49:29.058842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:52.814 [2024-10-17 16:49:29.058852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:29.058863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.058978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 [2024-10-17 16:49:29.058992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:52.814 [2024-10-17 16:49:29.059003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:29.059013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.059050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 [2024-10-17 16:49:29.059067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:52.814 [2024-10-17 16:49:29.059077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:29.059087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.059125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 [2024-10-17 16:49:29.059136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:52.814 [2024-10-17 16:49:29.059147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:29.059156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.059197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:52.814 [2024-10-17 16:49:29.059213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:52.814 [2024-10-17 16:49:29.059223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:52.814 [2024-10-17 16:49:29.059234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:52.814 [2024-10-17 16:49:29.059352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 656.918 ms, result 0 00:34:54.191 00:34:54.191 00:34:54.191 16:49:30 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:55.568 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:55.568 16:49:31 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:55.568 16:49:31 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:34:55.568 16:49:31 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:55.827 16:49:31 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:55.827 16:49:31 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:55.827 16:49:31 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76209 00:34:55.827 16:49:31 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 76209 ']' 00:34:55.827 16:49:31 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 76209 00:34:55.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (76209) - No such process 00:34:55.827 Process with pid 76209 is not found 00:34:55.827 16:49:31 ftl.ftl_restore -- 
common/autotest_common.sh@977 -- # echo 'Process with pid 76209 is not found' 00:34:55.827 16:49:31 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:34:55.827 Remove shared memory files 00:34:55.827 16:49:31 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:55.827 16:49:31 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:34:55.827 16:49:31 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:34:55.827 16:49:32 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:34:55.827 16:49:32 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:55.827 16:49:32 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:34:55.827 00:34:55.827 real 3m3.961s 00:34:55.827 user 2m51.571s 00:34:55.827 sys 0m14.198s 00:34:55.827 16:49:32 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:55.827 16:49:32 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:34:55.827 ************************************ 00:34:55.827 END TEST ftl_restore 00:34:55.827 ************************************ 00:34:55.827 16:49:32 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:34:55.827 16:49:32 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:34:55.827 16:49:32 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:55.827 16:49:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:55.827 ************************************ 00:34:55.827 START TEST ftl_dirty_shutdown 00:34:55.827 ************************************ 00:34:55.827 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:34:56.087 * Looking for test storage... 
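[editor's note] The ftl_restore run closes out just above: md5sum -c testfile.md5 confirms the data read back from the restored FTL device matches the checksum recorded earlier, and the restore_kill handler then removes the scratch files, calls killprocess 76209 (which finds the target already exited, hence the harmless "No such process"), and wipes leftover shared-memory files. The statistics dump is internally consistent, too: the reported WAF of 1.0560 is simply total writes / user writes = 18112 / 17152 ~= 1.056. A condensed sketch of that verify-and-cleanup pattern, with the helpers reduced to what the xtrace shows (paths and pid are from this run; treat the function as illustrative, not the exact restore.sh source):

    # Condensed verify-and-cleanup pattern from test/ftl/restore.sh, as traced above.
    # killprocess and remove_shm come from the sourced test helpers
    # (test/common/autotest_common.sh and test/ftl/common.sh respectively).
    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    svcpid=76209                                   # pid of the spdk_tgt under test

    md5sum -c "$testdir/testfile.md5"              # non-zero exit if restored data differs

    restore_kill() {
        rm -f "$testdir/testfile" "$testdir/testfile.md5" "$testdir/config/ftl.json"
        killprocess "$svcpid" || true              # tolerate a target that already exited
        remove_shm                                 # drop stale /dev/shm files
    }
    trap - SIGINT SIGTERM EXIT                     # disarm the error trap before cleanup
    restore_kill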
00:34:56.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:56.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.087 --rc genhtml_branch_coverage=1 00:34:56.087 --rc genhtml_function_coverage=1 00:34:56.087 --rc genhtml_legend=1 00:34:56.087 --rc geninfo_all_blocks=1 00:34:56.087 --rc geninfo_unexecuted_blocks=1 00:34:56.087 00:34:56.087 ' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:56.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.087 --rc genhtml_branch_coverage=1 00:34:56.087 --rc genhtml_function_coverage=1 00:34:56.087 --rc genhtml_legend=1 00:34:56.087 --rc geninfo_all_blocks=1 00:34:56.087 --rc geninfo_unexecuted_blocks=1 00:34:56.087 00:34:56.087 ' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:56.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.087 --rc genhtml_branch_coverage=1 00:34:56.087 --rc genhtml_function_coverage=1 00:34:56.087 --rc genhtml_legend=1 00:34:56.087 --rc geninfo_all_blocks=1 00:34:56.087 --rc geninfo_unexecuted_blocks=1 00:34:56.087 00:34:56.087 ' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:56.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.087 --rc genhtml_branch_coverage=1 00:34:56.087 --rc genhtml_function_coverage=1 00:34:56.087 --rc genhtml_legend=1 00:34:56.087 --rc geninfo_all_blocks=1 00:34:56.087 --rc geninfo_unexecuted_blocks=1 00:34:56.087 00:34:56.087 ' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:34:56.087 16:49:32 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:34:56.087 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78193 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78193 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78193 ']' 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:56.088 16:49:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:56.347 [2024-10-17 16:49:32.447047] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
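[editor's note] The version gate traced a few lines up decides whether the lcov 1.x-style coverage flags apply: lt 1.15 2 delegates to cmp_versions 1.15 '<' 2 in scripts/common.sh, which splits both version strings on '.' and '-' via IFS, walks the components numerically, and succeeds as soon as 1 < 2 at the first position, so the --rc lcov_branch_coverage/lcov_function_coverage options get exported. A self-contained sketch of that comparison, reduced to the less-than case (the real helper supports the other operators as well; missing components are treated as zero here):

    # Minimal dotted-version "less than", in the spirit of cmp_versions()
    # from scripts/common.sh; numeric components only.
    version_lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}  # pad the shorter version with zeros
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1                                   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use the 1.x-style --rc coverage flags"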
00:34:56.347 [2024-10-17 16:49:32.447580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78193 ] 00:34:56.347 [2024-10-17 16:49:32.620598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.606 [2024-10-17 16:49:32.742877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:34:57.540 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:34:57.799 16:49:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:34:58.058 16:49:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:58.058 { 00:34:58.058 "name": "nvme0n1", 00:34:58.058 "aliases": [ 00:34:58.058 "059251d2-654f-449a-bf81-3f5e48b7cc63" 00:34:58.058 ], 00:34:58.058 "product_name": "NVMe disk", 00:34:58.058 "block_size": 4096, 00:34:58.058 "num_blocks": 1310720, 00:34:58.058 "uuid": "059251d2-654f-449a-bf81-3f5e48b7cc63", 00:34:58.058 "numa_id": -1, 00:34:58.058 "assigned_rate_limits": { 00:34:58.058 "rw_ios_per_sec": 0, 00:34:58.058 "rw_mbytes_per_sec": 0, 00:34:58.058 "r_mbytes_per_sec": 0, 00:34:58.058 "w_mbytes_per_sec": 0 00:34:58.058 }, 00:34:58.058 "claimed": true, 00:34:58.058 "claim_type": "read_many_write_one", 00:34:58.058 "zoned": false, 00:34:58.058 "supported_io_types": { 00:34:58.058 "read": true, 00:34:58.058 "write": true, 00:34:58.058 "unmap": true, 00:34:58.058 "flush": true, 00:34:58.058 "reset": true, 00:34:58.058 "nvme_admin": true, 00:34:58.058 "nvme_io": true, 00:34:58.058 "nvme_io_md": false, 00:34:58.058 "write_zeroes": true, 00:34:58.058 "zcopy": false, 00:34:58.058 "get_zone_info": false, 00:34:58.058 "zone_management": false, 00:34:58.058 "zone_append": false, 00:34:58.058 "compare": true, 00:34:58.058 "compare_and_write": false, 00:34:58.058 "abort": true, 00:34:58.058 "seek_hole": false, 00:34:58.058 "seek_data": false, 00:34:58.058 
"copy": true, 00:34:58.058 "nvme_iov_md": false 00:34:58.058 }, 00:34:58.058 "driver_specific": { 00:34:58.058 "nvme": [ 00:34:58.058 { 00:34:58.058 "pci_address": "0000:00:11.0", 00:34:58.058 "trid": { 00:34:58.059 "trtype": "PCIe", 00:34:58.059 "traddr": "0000:00:11.0" 00:34:58.059 }, 00:34:58.059 "ctrlr_data": { 00:34:58.059 "cntlid": 0, 00:34:58.059 "vendor_id": "0x1b36", 00:34:58.059 "model_number": "QEMU NVMe Ctrl", 00:34:58.059 "serial_number": "12341", 00:34:58.059 "firmware_revision": "8.0.0", 00:34:58.059 "subnqn": "nqn.2019-08.org.qemu:12341", 00:34:58.059 "oacs": { 00:34:58.059 "security": 0, 00:34:58.059 "format": 1, 00:34:58.059 "firmware": 0, 00:34:58.059 "ns_manage": 1 00:34:58.059 }, 00:34:58.059 "multi_ctrlr": false, 00:34:58.059 "ana_reporting": false 00:34:58.059 }, 00:34:58.059 "vs": { 00:34:58.059 "nvme_version": "1.4" 00:34:58.059 }, 00:34:58.059 "ns_data": { 00:34:58.059 "id": 1, 00:34:58.059 "can_share": false 00:34:58.059 } 00:34:58.059 } 00:34:58.059 ], 00:34:58.059 "mp_policy": "active_passive" 00:34:58.059 } 00:34:58.059 } 00:34:58.059 ]' 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:58.059 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:58.317 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a3531b8f-b1d3-46b8-a3c3-c8b159d3cdc2 00:34:58.317 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:34:58.317 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3531b8f-b1d3-46b8-a3c3-c8b159d3cdc2 00:34:58.575 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:34:58.575 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f25f96c3-79dd-45dd-a09a-1e8ae50259c9 00:34:58.575 16:49:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f25f96c3-79dd-45dd-a09a-1e8ae50259c9 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:34:58.833 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:59.107 { 00:34:59.107 "name": "51e62053-fa92-4d27-bd4a-719e32fdf25f", 00:34:59.107 "aliases": [ 00:34:59.107 "lvs/nvme0n1p0" 00:34:59.107 ], 00:34:59.107 "product_name": "Logical Volume", 00:34:59.107 "block_size": 4096, 00:34:59.107 "num_blocks": 26476544, 00:34:59.107 "uuid": "51e62053-fa92-4d27-bd4a-719e32fdf25f", 00:34:59.107 "assigned_rate_limits": { 00:34:59.107 "rw_ios_per_sec": 0, 00:34:59.107 "rw_mbytes_per_sec": 0, 00:34:59.107 "r_mbytes_per_sec": 0, 00:34:59.107 "w_mbytes_per_sec": 0 00:34:59.107 }, 00:34:59.107 "claimed": false, 00:34:59.107 "zoned": false, 00:34:59.107 "supported_io_types": { 00:34:59.107 "read": true, 00:34:59.107 "write": true, 00:34:59.107 "unmap": true, 00:34:59.107 "flush": false, 00:34:59.107 "reset": true, 00:34:59.107 "nvme_admin": false, 00:34:59.107 "nvme_io": false, 00:34:59.107 "nvme_io_md": false, 00:34:59.107 "write_zeroes": true, 00:34:59.107 "zcopy": false, 00:34:59.107 "get_zone_info": false, 00:34:59.107 "zone_management": false, 00:34:59.107 "zone_append": false, 00:34:59.107 "compare": false, 00:34:59.107 "compare_and_write": false, 00:34:59.107 "abort": false, 00:34:59.107 "seek_hole": true, 00:34:59.107 "seek_data": true, 00:34:59.107 "copy": false, 00:34:59.107 "nvme_iov_md": false 00:34:59.107 }, 00:34:59.107 "driver_specific": { 00:34:59.107 "lvol": { 00:34:59.107 "lvol_store_uuid": "f25f96c3-79dd-45dd-a09a-1e8ae50259c9", 00:34:59.107 "base_bdev": "nvme0n1", 00:34:59.107 "thin_provision": true, 00:34:59.107 "num_allocated_clusters": 0, 00:34:59.107 "snapshot": false, 00:34:59.107 "clone": false, 00:34:59.107 "esnap_clone": false 00:34:59.107 } 00:34:59.107 } 00:34:59.107 } 00:34:59.107 ]' 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:34:59.107 16:49:35 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:34:59.392 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:59.650 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:59.650 { 00:34:59.650 "name": "51e62053-fa92-4d27-bd4a-719e32fdf25f", 00:34:59.650 "aliases": [ 00:34:59.650 "lvs/nvme0n1p0" 00:34:59.650 ], 00:34:59.650 "product_name": "Logical Volume", 00:34:59.650 "block_size": 4096, 00:34:59.650 "num_blocks": 26476544, 00:34:59.650 "uuid": "51e62053-fa92-4d27-bd4a-719e32fdf25f", 00:34:59.650 "assigned_rate_limits": { 00:34:59.650 "rw_ios_per_sec": 0, 00:34:59.650 "rw_mbytes_per_sec": 0, 00:34:59.650 "r_mbytes_per_sec": 0, 00:34:59.650 "w_mbytes_per_sec": 0 00:34:59.650 }, 00:34:59.650 "claimed": false, 00:34:59.650 "zoned": false, 00:34:59.650 "supported_io_types": { 00:34:59.650 "read": true, 00:34:59.650 "write": true, 00:34:59.650 "unmap": true, 00:34:59.650 "flush": false, 00:34:59.650 "reset": true, 00:34:59.650 "nvme_admin": false, 00:34:59.650 "nvme_io": false, 00:34:59.650 "nvme_io_md": false, 00:34:59.650 "write_zeroes": true, 00:34:59.650 "zcopy": false, 00:34:59.650 "get_zone_info": false, 00:34:59.650 "zone_management": false, 00:34:59.650 "zone_append": false, 00:34:59.650 "compare": false, 00:34:59.650 "compare_and_write": false, 00:34:59.650 "abort": false, 00:34:59.650 "seek_hole": true, 00:34:59.650 "seek_data": true, 00:34:59.650 "copy": false, 00:34:59.650 "nvme_iov_md": false 00:34:59.650 }, 00:34:59.650 "driver_specific": { 00:34:59.650 "lvol": { 00:34:59.650 "lvol_store_uuid": "f25f96c3-79dd-45dd-a09a-1e8ae50259c9", 00:34:59.650 "base_bdev": "nvme0n1", 00:34:59.650 "thin_provision": true, 00:34:59.650 "num_allocated_clusters": 0, 00:34:59.650 "snapshot": false, 00:34:59.650 "clone": false, 00:34:59.650 "esnap_clone": false 00:34:59.650 } 00:34:59.650 } 00:34:59.650 } 00:34:59.650 ]' 00:34:59.650 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:59.650 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:34:59.650 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:59.907 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:34:59.907 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:34:59.907 16:49:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:34:59.907 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:34:59.907 16:49:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:34:59.908 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:34:59.908 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:59.908 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=51e62053-fa92-4d27-bd4a-719e32fdf25f 00:34:59.908 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:59.908 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:59.908 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:34:59.908 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 51e62053-fa92-4d27-bd4a-719e32fdf25f 00:35:00.165 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:35:00.165 { 00:35:00.165 "name": "51e62053-fa92-4d27-bd4a-719e32fdf25f", 00:35:00.165 "aliases": [ 00:35:00.165 "lvs/nvme0n1p0" 00:35:00.165 ], 00:35:00.165 "product_name": "Logical Volume", 00:35:00.165 "block_size": 4096, 00:35:00.165 "num_blocks": 26476544, 00:35:00.165 "uuid": "51e62053-fa92-4d27-bd4a-719e32fdf25f", 00:35:00.165 "assigned_rate_limits": { 00:35:00.165 "rw_ios_per_sec": 0, 00:35:00.165 "rw_mbytes_per_sec": 0, 00:35:00.165 "r_mbytes_per_sec": 0, 00:35:00.165 "w_mbytes_per_sec": 0 00:35:00.165 }, 00:35:00.165 "claimed": false, 00:35:00.165 "zoned": false, 00:35:00.165 "supported_io_types": { 00:35:00.165 "read": true, 00:35:00.165 "write": true, 00:35:00.165 "unmap": true, 00:35:00.165 "flush": false, 00:35:00.165 "reset": true, 00:35:00.165 "nvme_admin": false, 00:35:00.165 "nvme_io": false, 00:35:00.165 "nvme_io_md": false, 00:35:00.165 "write_zeroes": true, 00:35:00.165 "zcopy": false, 00:35:00.165 "get_zone_info": false, 00:35:00.165 "zone_management": false, 00:35:00.165 "zone_append": false, 00:35:00.165 "compare": false, 00:35:00.165 "compare_and_write": false, 00:35:00.165 "abort": false, 00:35:00.165 "seek_hole": true, 00:35:00.165 "seek_data": true, 00:35:00.165 "copy": false, 00:35:00.165 "nvme_iov_md": false 00:35:00.165 }, 00:35:00.165 "driver_specific": { 00:35:00.165 "lvol": { 00:35:00.165 "lvol_store_uuid": "f25f96c3-79dd-45dd-a09a-1e8ae50259c9", 00:35:00.165 "base_bdev": "nvme0n1", 00:35:00.165 "thin_provision": true, 00:35:00.165 "num_allocated_clusters": 0, 00:35:00.165 "snapshot": false, 00:35:00.165 "clone": false, 00:35:00.165 "esnap_clone": false 00:35:00.165 } 00:35:00.165 } 00:35:00.165 } 00:35:00.165 ]' 00:35:00.165 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:35:00.165 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:35:00.165 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 51e62053-fa92-4d27-bd4a-719e32fdf25f 
--l2p_dram_limit 10' 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:35:00.425 16:49:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 51e62053-fa92-4d27-bd4a-719e32fdf25f --l2p_dram_limit 10 -c nvc0n1p0 00:35:00.425 [2024-10-17 16:49:36.665734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.666000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:00.425 [2024-10-17 16:49:36.666031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:00.425 [2024-10-17 16:49:36.666042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.666133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.666149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:00.425 [2024-10-17 16:49:36.666163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:35:00.425 [2024-10-17 16:49:36.666174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.666205] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:00.425 [2024-10-17 16:49:36.667267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:00.425 [2024-10-17 16:49:36.667302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.667313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:00.425 [2024-10-17 16:49:36.667330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:35:00.425 [2024-10-17 16:49:36.667340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.667466] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3b62d67c-a34f-4c56-8033-432fbb454aa3 00:35:00.425 [2024-10-17 16:49:36.669002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.669036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:35:00.425 [2024-10-17 16:49:36.669049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:35:00.425 [2024-10-17 16:49:36.669064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.676580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.676613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:00.425 [2024-10-17 16:49:36.676625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.474 ms 00:35:00.425 [2024-10-17 16:49:36.676638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.676752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.676770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:00.425 [2024-10-17 16:49:36.676781] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:35:00.425 [2024-10-17 16:49:36.676799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.676873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.676888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:00.425 [2024-10-17 16:49:36.676899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:00.425 [2024-10-17 16:49:36.676912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.676938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:00.425 [2024-10-17 16:49:36.681956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.681988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:00.425 [2024-10-17 16:49:36.682003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.030 ms 00:35:00.425 [2024-10-17 16:49:36.682033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.682071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.682082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:00.425 [2024-10-17 16:49:36.682095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:35:00.425 [2024-10-17 16:49:36.682105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.682153] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:35:00.425 [2024-10-17 16:49:36.682288] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:00.425 [2024-10-17 16:49:36.682309] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:00.425 [2024-10-17 16:49:36.682323] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:00.425 [2024-10-17 16:49:36.682339] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:00.425 [2024-10-17 16:49:36.682351] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:00.425 [2024-10-17 16:49:36.682365] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:00.425 [2024-10-17 16:49:36.682385] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:00.425 [2024-10-17 16:49:36.682397] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:00.425 [2024-10-17 16:49:36.682407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:00.425 [2024-10-17 16:49:36.682420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.682433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:00.425 [2024-10-17 16:49:36.682445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:35:00.425 [2024-10-17 16:49:36.682465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.682541] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.425 [2024-10-17 16:49:36.682552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:00.425 [2024-10-17 16:49:36.682564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:35:00.425 [2024-10-17 16:49:36.682573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.425 [2024-10-17 16:49:36.682659] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:00.425 [2024-10-17 16:49:36.682671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:00.425 [2024-10-17 16:49:36.682687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:00.425 [2024-10-17 16:49:36.682697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.425 [2024-10-17 16:49:36.682722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:00.425 [2024-10-17 16:49:36.682732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:00.426 [2024-10-17 16:49:36.682753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:00.426 [2024-10-17 16:49:36.682765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:00.426 [2024-10-17 16:49:36.682785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:00.426 [2024-10-17 16:49:36.682795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:00.426 [2024-10-17 16:49:36.682806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:00.426 [2024-10-17 16:49:36.682816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:00.426 [2024-10-17 16:49:36.682828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:00.426 [2024-10-17 16:49:36.682836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:00.426 [2024-10-17 16:49:36.682859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:00.426 [2024-10-17 16:49:36.682870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:00.426 [2024-10-17 16:49:36.682892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.426 [2024-10-17 16:49:36.682912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:00.426 [2024-10-17 16:49:36.682921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.426 [2024-10-17 16:49:36.682941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:00.426 [2024-10-17 16:49:36.682952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.426 [2024-10-17 16:49:36.682973] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:00.426 [2024-10-17 16:49:36.682982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:00.426 [2024-10-17 16:49:36.682993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:00.426 [2024-10-17 16:49:36.683002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:00.426 [2024-10-17 16:49:36.683015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:00.426 [2024-10-17 16:49:36.683024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:00.426 [2024-10-17 16:49:36.683035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:00.426 [2024-10-17 16:49:36.683044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:00.426 [2024-10-17 16:49:36.683055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:00.426 [2024-10-17 16:49:36.683064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:00.426 [2024-10-17 16:49:36.683075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:00.426 [2024-10-17 16:49:36.683084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.426 [2024-10-17 16:49:36.683094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:00.426 [2024-10-17 16:49:36.683103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:00.426 [2024-10-17 16:49:36.683114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.426 [2024-10-17 16:49:36.683122] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:00.426 [2024-10-17 16:49:36.683135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:00.426 [2024-10-17 16:49:36.683146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:00.426 [2024-10-17 16:49:36.683158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:00.426 [2024-10-17 16:49:36.683177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:00.426 [2024-10-17 16:49:36.683191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:00.426 [2024-10-17 16:49:36.683200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:00.426 [2024-10-17 16:49:36.683212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:00.426 [2024-10-17 16:49:36.683221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:00.426 [2024-10-17 16:49:36.683232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:00.426 [2024-10-17 16:49:36.683245] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:00.426 [2024-10-17 16:49:36.683261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:00.426 [2024-10-17 16:49:36.683272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:00.426 [2024-10-17 16:49:36.683285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:00.426 [2024-10-17 16:49:36.683295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:00.426 [2024-10-17 16:49:36.683307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:00.426 [2024-10-17 16:49:36.683318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:00.426 [2024-10-17 16:49:36.683330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:00.426 [2024-10-17 16:49:36.683341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:00.426 [2024-10-17 16:49:36.683353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:00.426 [2024-10-17 16:49:36.683363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:00.426 [2024-10-17 16:49:36.683378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:00.426 [2024-10-17 16:49:36.683388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:00.426 [2024-10-17 16:49:36.683400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:00.426 [2024-10-17 16:49:36.683410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:00.426 [2024-10-17 16:49:36.683422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:00.426 [2024-10-17 16:49:36.683432] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:35:00.426 [2024-10-17 16:49:36.683447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:00.426 [2024-10-17 16:49:36.683460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:00.426 [2024-10-17 16:49:36.683472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:00.426 [2024-10-17 16:49:36.683482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:00.426 [2024-10-17 16:49:36.683495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:00.426 [2024-10-17 16:49:36.683505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:00.426 [2024-10-17 16:49:36.683518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:00.426 [2024-10-17 16:49:36.683535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:35:00.426 [2024-10-17 16:49:36.683547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:00.426 [2024-10-17 16:49:36.683589] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:35:00.426 [2024-10-17 16:49:36.683607] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:35:03.713 [2024-10-17 16:49:40.004101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.713 [2024-10-17 16:49:40.004361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:35:03.713 [2024-10-17 16:49:40.004516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3325.886 ms 00:35:03.713 [2024-10-17 16:49:40.004568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.044672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.044937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:03.973 [2024-10-17 16:49:40.045030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.699 ms 00:35:03.973 [2024-10-17 16:49:40.045071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.045253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.045358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:03.973 [2024-10-17 16:49:40.045397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:35:03.973 [2024-10-17 16:49:40.045433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.092534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.092715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:03.973 [2024-10-17 16:49:40.092803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.029 ms 00:35:03.973 [2024-10-17 16:49:40.092846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.092922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.092961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:03.973 [2024-10-17 16:49:40.092993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:03.973 [2024-10-17 16:49:40.093087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.093621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.093759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:03.973 [2024-10-17 16:49:40.093844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:35:03.973 [2024-10-17 16:49:40.093884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.094065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.094108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:03.973 [2024-10-17 16:49:40.094182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:35:03.973 [2024-10-17 16:49:40.094222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.114770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.114909] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:03.973 [2024-10-17 16:49:40.114990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.531 ms 00:35:03.973 [2024-10-17 16:49:40.115034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.127812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:03.973 [2024-10-17 16:49:40.131129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.131249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:03.973 [2024-10-17 16:49:40.131325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.000 ms 00:35:03.973 [2024-10-17 16:49:40.131362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.226516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.226712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:35:03.973 [2024-10-17 16:49:40.226804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.250 ms 00:35:03.973 [2024-10-17 16:49:40.226841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.227056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.227188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:03.973 [2024-10-17 16:49:40.227277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:35:03.973 [2024-10-17 16:49:40.227311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.973 [2024-10-17 16:49:40.263458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.973 [2024-10-17 16:49:40.263606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:35:03.973 [2024-10-17 16:49:40.263688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.124 ms 00:35:03.973 [2024-10-17 16:49:40.263741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.300287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.300444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:35:04.233 [2024-10-17 16:49:40.300474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.535 ms 00:35:04.233 [2024-10-17 16:49:40.300485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.301331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.301356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:04.233 [2024-10-17 16:49:40.301370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:35:04.233 [2024-10-17 16:49:40.301381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.400722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.400783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:35:04.233 [2024-10-17 16:49:40.400808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.435 ms 00:35:04.233 [2024-10-17 16:49:40.400819] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.438149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.438206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:35:04.233 [2024-10-17 16:49:40.438229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.294 ms 00:35:04.233 [2024-10-17 16:49:40.438239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.475175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.475230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:35:04.233 [2024-10-17 16:49:40.475248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.945 ms 00:35:04.233 [2024-10-17 16:49:40.475259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.512305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.512353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:04.233 [2024-10-17 16:49:40.512371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.057 ms 00:35:04.233 [2024-10-17 16:49:40.512382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.512441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.512453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:04.233 [2024-10-17 16:49:40.512470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:04.233 [2024-10-17 16:49:40.512480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.512586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:04.233 [2024-10-17 16:49:40.512598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:04.233 [2024-10-17 16:49:40.512612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:35:04.233 [2024-10-17 16:49:40.512622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:04.233 [2024-10-17 16:49:40.513814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3853.899 ms, result 0 00:35:04.233 { 00:35:04.233 "name": "ftl0", 00:35:04.233 "uuid": "3b62d67c-a34f-4c56-8033-432fbb454aa3" 00:35:04.233 } 00:35:04.491 16:49:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:35:04.491 16:49:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:35:04.750 16:49:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:35:04.750 16:49:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:35:04.750 16:49:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:35:04.750 /dev/nbd0 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:35:04.750 1+0 records in 00:35:04.750 1+0 records out 00:35:04.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222053 s, 18.4 MB/s 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:35:04.750 16:49:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:35:05.009 [2024-10-17 16:49:41.123770] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:35:05.009 [2024-10-17 16:49:41.123882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78341 ] 00:35:05.009 [2024-10-17 16:49:41.294206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.268 [2024-10-17 16:49:41.412015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.643  [2024-10-17T16:49:43.877Z] Copying: 202/1024 [MB] (202 MBps) [2024-10-17T16:49:44.813Z] Copying: 406/1024 [MB] (203 MBps) [2024-10-17T16:49:45.748Z] Copying: 610/1024 [MB] (203 MBps) [2024-10-17T16:49:47.123Z] Copying: 812/1024 [MB] (201 MBps) [2024-10-17T16:49:47.123Z] Copying: 1005/1024 [MB] (193 MBps) [2024-10-17T16:49:48.060Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:35:11.761 00:35:11.761 16:49:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:13.665 16:49:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:35:13.665 [2024-10-17 16:49:49.741647] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
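The xtrace above is the waitfornbd helper from test/common/autotest_common.sh doing its job: once nbd_start_disk has mapped ftl0 to /dev/nbd0, the helper polls /proc/partitions until the kernel has registered the device, then proves the device is actually usable by pulling a single 4 KiB block with O_DIRECT. A condensed sketch of that pattern (function name and scratch path are placeholders, and the delay between polls is assumed, as it is not visible in this trace):

    wait_for_nbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # the device is up once the kernel lists it in its partition table
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off, not shown in the trace above
        done
        # prove it is readable: one 4 KiB block, direct I/O, non-empty result
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ "$(stat -c %s /tmp/nbdtest)" != 0 ]] || return 1
        rm -f /tmp/nbdtest
    }

    # usage in the traced flow (sh@71-@77 markers above, repo paths shortened):
    rpc.py nbd_start_disk ftl0 /dev/nbd0
    wait_for_nbd nbd0
    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile    # reference checksum, taken before the data touches FTL
    spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The traced helper runs the grep and the dd probe as two separate bounded loops; the sketch folds them into one pass. Note the throughput contrast in the surrounding records: filling testfile from /dev/urandom sustains about 200 MBps, while pushing the same 262144 four-KiB blocks (1 GiB) through ftl0 via /dev/nbd0 with --oflag=direct runs at roughly 17 MBps, since each block now traverses the nbd kernel module, the FTL write path, and the NV cache. The md5sum taken at sh@76 is the reference against which the data written through FTL can presumably be re-checked later in the script.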
00:35:13.665 [2024-10-17 16:49:49.741797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78434 ] 00:35:13.665 [2024-10-17 16:49:49.910646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:13.923 [2024-10-17 16:49:50.026431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.299  [2024-10-17T16:49:52.533Z] Copying: 15/1024 [MB] (15 MBps) [2024-10-17T16:49:53.467Z] Copying: 29/1024 [MB] (13 MBps) [2024-10-17T16:49:54.402Z] Copying: 45/1024 [MB] (16 MBps) [2024-10-17T16:49:55.780Z] Copying: 62/1024 [MB] (16 MBps) [2024-10-17T16:49:56.345Z] Copying: 78/1024 [MB] (16 MBps) [2024-10-17T16:49:57.723Z] Copying: 95/1024 [MB] (16 MBps) [2024-10-17T16:49:58.659Z] Copying: 111/1024 [MB] (16 MBps) [2024-10-17T16:49:59.596Z] Copying: 128/1024 [MB] (17 MBps) [2024-10-17T16:50:00.533Z] Copying: 147/1024 [MB] (18 MBps) [2024-10-17T16:50:01.467Z] Copying: 165/1024 [MB] (18 MBps) [2024-10-17T16:50:02.402Z] Copying: 184/1024 [MB] (18 MBps) [2024-10-17T16:50:03.336Z] Copying: 201/1024 [MB] (17 MBps) [2024-10-17T16:50:04.713Z] Copying: 219/1024 [MB] (17 MBps) [2024-10-17T16:50:05.650Z] Copying: 237/1024 [MB] (17 MBps) [2024-10-17T16:50:06.585Z] Copying: 255/1024 [MB] (17 MBps) [2024-10-17T16:50:07.524Z] Copying: 272/1024 [MB] (17 MBps) [2024-10-17T16:50:08.461Z] Copying: 290/1024 [MB] (17 MBps) [2024-10-17T16:50:09.399Z] Copying: 308/1024 [MB] (17 MBps) [2024-10-17T16:50:10.331Z] Copying: 326/1024 [MB] (18 MBps) [2024-10-17T16:50:11.703Z] Copying: 344/1024 [MB] (18 MBps) [2024-10-17T16:50:12.639Z] Copying: 362/1024 [MB] (18 MBps) [2024-10-17T16:50:13.574Z] Copying: 381/1024 [MB] (18 MBps) [2024-10-17T16:50:14.509Z] Copying: 399/1024 [MB] (17 MBps) [2024-10-17T16:50:15.446Z] Copying: 417/1024 [MB] (17 MBps) [2024-10-17T16:50:16.382Z] Copying: 435/1024 [MB] (18 MBps) [2024-10-17T16:50:17.317Z] Copying: 453/1024 [MB] (18 MBps) [2024-10-17T16:50:18.693Z] Copying: 472/1024 [MB] (18 MBps) [2024-10-17T16:50:19.630Z] Copying: 490/1024 [MB] (18 MBps) [2024-10-17T16:50:20.565Z] Copying: 508/1024 [MB] (18 MBps) [2024-10-17T16:50:21.501Z] Copying: 526/1024 [MB] (18 MBps) [2024-10-17T16:50:22.436Z] Copying: 544/1024 [MB] (17 MBps) [2024-10-17T16:50:23.402Z] Copying: 562/1024 [MB] (17 MBps) [2024-10-17T16:50:24.359Z] Copying: 580/1024 [MB] (18 MBps) [2024-10-17T16:50:25.294Z] Copying: 598/1024 [MB] (18 MBps) [2024-10-17T16:50:26.670Z] Copying: 616/1024 [MB] (18 MBps) [2024-10-17T16:50:27.605Z] Copying: 635/1024 [MB] (18 MBps) [2024-10-17T16:50:28.540Z] Copying: 654/1024 [MB] (18 MBps) [2024-10-17T16:50:29.477Z] Copying: 672/1024 [MB] (18 MBps) [2024-10-17T16:50:30.413Z] Copying: 690/1024 [MB] (18 MBps) [2024-10-17T16:50:31.349Z] Copying: 708/1024 [MB] (17 MBps) [2024-10-17T16:50:32.312Z] Copying: 727/1024 [MB] (18 MBps) [2024-10-17T16:50:33.689Z] Copying: 745/1024 [MB] (18 MBps) [2024-10-17T16:50:34.623Z] Copying: 763/1024 [MB] (18 MBps) [2024-10-17T16:50:35.559Z] Copying: 782/1024 [MB] (18 MBps) [2024-10-17T16:50:36.494Z] Copying: 800/1024 [MB] (18 MBps) [2024-10-17T16:50:37.429Z] Copying: 818/1024 [MB] (18 MBps) [2024-10-17T16:50:38.365Z] Copying: 836/1024 [MB] (18 MBps) [2024-10-17T16:50:39.301Z] Copying: 855/1024 [MB] (18 MBps) [2024-10-17T16:50:40.677Z] Copying: 873/1024 [MB] (18 MBps) [2024-10-17T16:50:41.614Z] Copying: 891/1024 [MB] (18 MBps) 
[2024-10-17T16:50:42.548Z] Copying: 909/1024 [MB] (18 MBps) [2024-10-17T16:50:43.480Z] Copying: 927/1024 [MB] (18 MBps) [2024-10-17T16:50:44.415Z] Copying: 945/1024 [MB] (18 MBps) [2024-10-17T16:50:45.350Z] Copying: 963/1024 [MB] (18 MBps) [2024-10-17T16:50:46.285Z] Copying: 981/1024 [MB] (17 MBps) [2024-10-17T16:50:47.661Z] Copying: 999/1024 [MB] (17 MBps) [2024-10-17T16:50:47.661Z] Copying: 1017/1024 [MB] (18 MBps) [2024-10-17T16:50:49.040Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:36:12.741 00:36:12.741 16:50:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:36:12.741 16:50:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:36:12.741 16:50:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:36:12.999 [2024-10-17 16:50:49.182750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.182815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:12.999 [2024-10-17 16:50:49.182848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:36:12.999 [2024-10-17 16:50:49.182861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.999 [2024-10-17 16:50:49.182908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:12.999 [2024-10-17 16:50:49.187066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.187104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:12.999 [2024-10-17 16:50:49.187119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.139 ms 00:36:12.999 [2024-10-17 16:50:49.187129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.999 [2024-10-17 16:50:49.189154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.189194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:12.999 [2024-10-17 16:50:49.189210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.992 ms 00:36:12.999 [2024-10-17 16:50:49.189221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.999 [2024-10-17 16:50:49.206844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.206882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:36:12.999 [2024-10-17 16:50:49.206899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.626 ms 00:36:12.999 [2024-10-17 16:50:49.206912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.999 [2024-10-17 16:50:49.211960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.211993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:36:12.999 [2024-10-17 16:50:49.212008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.014 ms 00:36:12.999 [2024-10-17 16:50:49.212018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.999 [2024-10-17 16:50:49.248796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.248831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:36:12.999 [2024-10-17 16:50:49.248864] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.764 ms 00:36:12.999 [2024-10-17 16:50:49.248875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.999 [2024-10-17 16:50:49.270278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.270463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:36:12.999 [2024-10-17 16:50:49.270500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.384 ms 00:36:12.999 [2024-10-17 16:50:49.270520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:12.999 [2024-10-17 16:50:49.270792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:12.999 [2024-10-17 16:50:49.270820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:36:12.999 [2024-10-17 16:50:49.270842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:36:12.999 [2024-10-17 16:50:49.270855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.259 [2024-10-17 16:50:49.306012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:13.259 [2024-10-17 16:50:49.306049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:36:13.259 [2024-10-17 16:50:49.306066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.181 ms 00:36:13.259 [2024-10-17 16:50:49.306076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.259 [2024-10-17 16:50:49.342068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:13.259 [2024-10-17 16:50:49.342103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:36:13.259 [2024-10-17 16:50:49.342126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.004 ms 00:36:13.259 [2024-10-17 16:50:49.342137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.259 [2024-10-17 16:50:49.377874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:13.259 [2024-10-17 16:50:49.378014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:36:13.260 [2024-10-17 16:50:49.378039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.747 ms 00:36:13.260 [2024-10-17 16:50:49.378049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.260 [2024-10-17 16:50:49.413401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:13.260 [2024-10-17 16:50:49.413434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:36:13.260 [2024-10-17 16:50:49.413450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.246 ms 00:36:13.260 [2024-10-17 16:50:49.413459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.260 [2024-10-17 16:50:49.413519] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:13.260 [2024-10-17 16:50:49.413539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.413991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414216] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 
16:50:49.414517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:13.260 [2024-10-17 16:50:49.414598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:13.261 [2024-10-17 16:50:49.414793] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:13.261 [2024-10-17 16:50:49.414805] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b62d67c-a34f-4c56-8033-432fbb454aa3 00:36:13.261 [2024-10-17 16:50:49.414816] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:36:13.261 [2024-10-17 16:50:49.414830] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:36:13.261 [2024-10-17 16:50:49.414840] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:36:13.261 [2024-10-17 16:50:49.414852] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:36:13.261 [2024-10-17 16:50:49.414862] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:13.261 [2024-10-17 16:50:49.414875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:13.261 [2024-10-17 16:50:49.414888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:13.261 [2024-10-17 16:50:49.414899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:13.261 [2024-10-17 16:50:49.414908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:13.261 [2024-10-17 16:50:49.414921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:13.261 [2024-10-17 16:50:49.414931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:13.261 [2024-10-17 16:50:49.414944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:36:13.261 [2024-10-17 16:50:49.414954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.261 [2024-10-17 16:50:49.434594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:13.261 [2024-10-17 16:50:49.434626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:13.261 [2024-10-17 16:50:49.434642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.615 ms 00:36:13.261 [2024-10-17 16:50:49.434652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.261 [2024-10-17 16:50:49.435214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:13.261 [2024-10-17 16:50:49.435238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:13.261 [2024-10-17 16:50:49.435252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:36:13.261 [2024-10-17 16:50:49.435262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.261 [2024-10-17 16:50:49.500738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.261 [2024-10-17 16:50:49.500776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:13.261 [2024-10-17 16:50:49.500799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.261 [2024-10-17 16:50:49.500817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.261 [2024-10-17 16:50:49.500903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.261 [2024-10-17 16:50:49.500921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:13.261 [2024-10-17 16:50:49.500940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.261 [2024-10-17 16:50:49.500957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.261 [2024-10-17 16:50:49.501084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.261 [2024-10-17 16:50:49.501103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:13.261 [2024-10-17 16:50:49.501126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.261 [2024-10-17 16:50:49.501141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
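Every management step in this unload, as in the startup earlier, is bracketed by the same fixed trace_step quartet from mngt/ftl_mngt.c (Action, name, duration, status), which makes per-step timings easy to pull out of a saved console log. A throwaway pass along these lines would do it (a hypothetical helper, not part of the test; it assumes one record per line as the console emits them, and ftl.log is a placeholder):

    awk '/428:trace_step/ { sub(/.*name: /, "");     step = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%-40s %s\n", step, $0 }' ftl.log

Run against the shutdown above, that surfaces lines like "Persist L2P  17.626 ms" and makes the handful of ~36 ms persist steps easy to spot inside the 544 ms total reported below.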
00:36:13.261 [2024-10-17 16:50:49.501180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.261 [2024-10-17 16:50:49.501197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:13.261 [2024-10-17 16:50:49.501213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.261 [2024-10-17 16:50:49.501230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.625737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.625978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:13.521 [2024-10-17 16:50:49.626009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 16:50:49.626023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.725309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.725365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:13.521 [2024-10-17 16:50:49.725382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 16:50:49.725409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.725530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.725543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:13.521 [2024-10-17 16:50:49.725557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 16:50:49.725567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.725633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.725648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:13.521 [2024-10-17 16:50:49.725661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 16:50:49.725671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.725830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.725855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:13.521 [2024-10-17 16:50:49.725873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 16:50:49.725900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.725947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.725960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:13.521 [2024-10-17 16:50:49.725976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 16:50:49.725986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.726055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.726071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:13.521 [2024-10-17 16:50:49.726084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 
16:50:49.726094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.726148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:13.521 [2024-10-17 16:50:49.726163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:13.521 [2024-10-17 16:50:49.726176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:13.521 [2024-10-17 16:50:49.726186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:13.521 [2024-10-17 16:50:49.726333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 544.428 ms, result 0 00:36:13.521 true 00:36:13.521 16:50:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78193 00:36:13.521 16:50:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78193 00:36:13.521 16:50:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:36:13.780 [2024-10-17 16:50:49.858749] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:36:13.780 [2024-10-17 16:50:49.858871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79041 ] 00:36:13.780 [2024-10-17 16:50:50.041548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.037 [2024-10-17 16:50:50.157095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.412  [2024-10-17T16:50:52.647Z] Copying: 202/1024 [MB] (202 MBps) [2024-10-17T16:50:53.584Z] Copying: 405/1024 [MB] (203 MBps) [2024-10-17T16:50:54.520Z] Copying: 610/1024 [MB] (205 MBps) [2024-10-17T16:50:55.895Z] Copying: 814/1024 [MB] (203 MBps) [2024-10-17T16:50:55.895Z] Copying: 1015/1024 [MB] (201 MBps) [2024-10-17T16:50:56.833Z] Copying: 1024/1024 [MB] (average 203 MBps) 00:36:20.534 00:36:20.534 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78193 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:36:20.535 16:50:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:20.535 [2024-10-17 16:50:56.719099] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
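This is the core of the dirty-shutdown scenario: with ftl0 unloaded, the test SIGKILLs the entire spdk_tgt (hence the "line 87: 78193 Killed" message above), generates a second 1 GiB random file, and then points spdk_dd directly at ftl0 using the bdev configuration saved earlier with save_subsystem_config, seeking past the region the first file occupied. Reconstructed from the sh@83-@88 markers above, with $svcpid standing in for PID 78193 (the script's actual variable name is not visible here) and paths shortened:

    kill -9 "$svcpid"                               # @83: kill the target outright
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"      # @84: drop the stale trace file
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144   # @87: second 1 GiB payload
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 \
            --json=ftl.json                         # @88: reattach ftl0 from the saved config

The records that follow show the cost of coming back this way: spdk_dd's own application instance re-opens the bdevs itself, and the bs_recover "Performing recovery on blobstore" notices below show state being rebuilt rather than loaded clean.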
00:36:20.535 [2024-10-17 16:50:56.719318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79116 ] 00:36:20.793 [2024-10-17 16:50:56.892519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.793 [2024-10-17 16:50:57.009239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.361 [2024-10-17 16:50:57.359527] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:21.361 [2024-10-17 16:50:57.359594] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:36:21.361 [2024-10-17 16:50:57.425993] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:36:21.361 [2024-10-17 16:50:57.426516] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:36:21.361 [2024-10-17 16:50:57.426747] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:36:21.621 [2024-10-17 16:50:57.731664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.621 [2024-10-17 16:50:57.731731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:36:21.621 [2024-10-17 16:50:57.731748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:21.621 [2024-10-17 16:50:57.731759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.621 [2024-10-17 16:50:57.731813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.621 [2024-10-17 16:50:57.731851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:21.621 [2024-10-17 16:50:57.731863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:36:21.621 [2024-10-17 16:50:57.731874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.621 [2024-10-17 16:50:57.731895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:36:21.621 [2024-10-17 16:50:57.732839] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:36:21.621 [2024-10-17 16:50:57.732868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.621 [2024-10-17 16:50:57.732879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:21.621 [2024-10-17 16:50:57.732891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:36:21.621 [2024-10-17 16:50:57.732901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.621 [2024-10-17 16:50:57.734432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:36:21.621 [2024-10-17 16:50:57.754340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.621 [2024-10-17 16:50:57.754379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:36:21.621 [2024-10-17 16:50:57.754401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.941 ms 00:36:21.621 [2024-10-17 16:50:57.754412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.621 [2024-10-17 16:50:57.754476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.621 [2024-10-17 16:50:57.754490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:36:21.621 [2024-10-17 16:50:57.754501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:36:21.621 [2024-10-17 16:50:57.754511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.621 [2024-10-17 16:50:57.761332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.621 [2024-10-17 16:50:57.761499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:21.621 [2024-10-17 16:50:57.761520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.758 ms 00:36:21.622 [2024-10-17 16:50:57.761532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.622 [2024-10-17 16:50:57.761631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.622 [2024-10-17 16:50:57.761646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:21.622 [2024-10-17 16:50:57.761658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:36:21.622 [2024-10-17 16:50:57.761668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.622 [2024-10-17 16:50:57.761729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.622 [2024-10-17 16:50:57.761743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:36:21.622 [2024-10-17 16:50:57.761757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:36:21.622 [2024-10-17 16:50:57.761767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.622 [2024-10-17 16:50:57.761793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:36:21.622 [2024-10-17 16:50:57.766596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.622 [2024-10-17 16:50:57.766628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:21.622 [2024-10-17 16:50:57.766641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.818 ms 00:36:21.622 [2024-10-17 16:50:57.766652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.622 [2024-10-17 16:50:57.766682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.622 [2024-10-17 16:50:57.766692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:36:21.622 [2024-10-17 16:50:57.766715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:21.622 [2024-10-17 16:50:57.766726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.622 [2024-10-17 16:50:57.766777] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:36:21.622 [2024-10-17 16:50:57.766818] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:36:21.622 [2024-10-17 16:50:57.766860] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:36:21.622 [2024-10-17 16:50:57.766879] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:36:21.622 [2024-10-17 16:50:57.766969] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:36:21.622 [2024-10-17 16:50:57.766982] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:36:21.622 
[2024-10-17 16:50:57.766995] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:36:21.622 [2024-10-17 16:50:57.767008] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767021] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767036] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:36:21.622 [2024-10-17 16:50:57.767046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:36:21.622 [2024-10-17 16:50:57.767056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:36:21.622 [2024-10-17 16:50:57.767066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:36:21.622 [2024-10-17 16:50:57.767077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.622 [2024-10-17 16:50:57.767087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:36:21.622 [2024-10-17 16:50:57.767098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:36:21.622 [2024-10-17 16:50:57.767108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.622 [2024-10-17 16:50:57.767184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.622 [2024-10-17 16:50:57.767195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:36:21.622 [2024-10-17 16:50:57.767206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:36:21.622 [2024-10-17 16:50:57.767219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.622 [2024-10-17 16:50:57.767311] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:36:21.622 [2024-10-17 16:50:57.767325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:36:21.622 [2024-10-17 16:50:57.767336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:36:21.622 [2024-10-17 16:50:57.767366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:36:21.622 [2024-10-17 16:50:57.767397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:21.622 [2024-10-17 16:50:57.767417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:36:21.622 [2024-10-17 16:50:57.767435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:36:21.622 [2024-10-17 16:50:57.767445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:36:21.622 [2024-10-17 16:50:57.767455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:36:21.622 [2024-10-17 16:50:57.767465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:36:21.622 [2024-10-17 16:50:57.767475] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:36:21.622 [2024-10-17 16:50:57.767494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:36:21.622 [2024-10-17 16:50:57.767523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:36:21.622 [2024-10-17 16:50:57.767551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:36:21.622 [2024-10-17 16:50:57.767579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:36:21.622 [2024-10-17 16:50:57.767607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:36:21.622 [2024-10-17 16:50:57.767634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:21.622 [2024-10-17 16:50:57.767653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:36:21.622 [2024-10-17 16:50:57.767662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:36:21.622 [2024-10-17 16:50:57.767671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:36:21.622 [2024-10-17 16:50:57.767680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:36:21.622 [2024-10-17 16:50:57.767689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:36:21.622 [2024-10-17 16:50:57.767710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:36:21.622 [2024-10-17 16:50:57.767734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:36:21.622 [2024-10-17 16:50:57.767743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:21.622 [2024-10-17 16:50:57.767753] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:36:21.622 [2024-10-17 16:50:57.767763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:36:21.622 [2024-10-17 16:50:57.767773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:36:21.622 [2024-10-17 16:50:57.767783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:36:21.622 [2024-10-17 
16:50:57.767796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:36:21.622 [2024-10-17 16:50:57.767805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:36:21.623 [2024-10-17 16:50:57.767815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:36:21.623 [2024-10-17 16:50:57.767824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:36:21.623 [2024-10-17 16:50:57.767833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:36:21.623 [2024-10-17 16:50:57.767842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:36:21.623 [2024-10-17 16:50:57.767853] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:36:21.623 [2024-10-17 16:50:57.767865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:21.623 [2024-10-17 16:50:57.767877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:36:21.623 [2024-10-17 16:50:57.767887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:36:21.623 [2024-10-17 16:50:57.767898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:36:21.623 [2024-10-17 16:50:57.767908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:36:21.623 [2024-10-17 16:50:57.767918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:36:21.623 [2024-10-17 16:50:57.767928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:36:21.623 [2024-10-17 16:50:57.767939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:36:21.623 [2024-10-17 16:50:57.767949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:36:21.623 [2024-10-17 16:50:57.767960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:36:21.623 [2024-10-17 16:50:57.767970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:36:21.623 [2024-10-17 16:50:57.767980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:36:21.623 [2024-10-17 16:50:57.767990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:36:21.623 [2024-10-17 16:50:57.768000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:36:21.623 [2024-10-17 16:50:57.768010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:36:21.623 [2024-10-17 16:50:57.768020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:36:21.623 [2024-10-17 16:50:57.768032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:21.623 [2024-10-17 16:50:57.768042] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:21.623 [2024-10-17 16:50:57.768053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:36:21.623 [2024-10-17 16:50:57.768065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:36:21.623 [2024-10-17 16:50:57.768075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:36:21.623 [2024-10-17 16:50:57.768086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.768096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:36:21.623 [2024-10-17 16:50:57.768106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:36:21.623 [2024-10-17 16:50:57.768116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.807460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.807506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:21.623 [2024-10-17 16:50:57.807521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.355 ms 00:36:21.623 [2024-10-17 16:50:57.807533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.807613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.807625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:36:21.623 [2024-10-17 16:50:57.807640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:36:21.623 [2024-10-17 16:50:57.807651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.862030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.862070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:21.623 [2024-10-17 16:50:57.862084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.378 ms 00:36:21.623 [2024-10-17 16:50:57.862095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.862139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.862151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:21.623 [2024-10-17 16:50:57.862162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:36:21.623 [2024-10-17 16:50:57.862173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.862660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.862674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:21.623 [2024-10-17 16:50:57.862686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:36:21.623 [2024-10-17 16:50:57.862716] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.862837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.862854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:21.623 [2024-10-17 16:50:57.862866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:36:21.623 [2024-10-17 16:50:57.862876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.881815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.881854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:21.623 [2024-10-17 16:50:57.881869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.948 ms 00:36:21.623 [2024-10-17 16:50:57.881880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.623 [2024-10-17 16:50:57.901189] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:21.623 [2024-10-17 16:50:57.901227] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:36:21.623 [2024-10-17 16:50:57.901243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.623 [2024-10-17 16:50:57.901254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:36:21.623 [2024-10-17 16:50:57.901266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.281 ms 00:36:21.623 [2024-10-17 16:50:57.901277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.883 [2024-10-17 16:50:57.930941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.883 [2024-10-17 16:50:57.930985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:36:21.883 [2024-10-17 16:50:57.931012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.666 ms 00:36:21.883 [2024-10-17 16:50:57.931023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.883 [2024-10-17 16:50:57.949431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.883 [2024-10-17 16:50:57.949471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:36:21.883 [2024-10-17 16:50:57.949485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.389 ms 00:36:21.883 [2024-10-17 16:50:57.949495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.883 [2024-10-17 16:50:57.967208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.883 [2024-10-17 16:50:57.967246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:36:21.883 [2024-10-17 16:50:57.967262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.702 ms 00:36:21.883 [2024-10-17 16:50:57.967277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.883 [2024-10-17 16:50:57.968065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.883 [2024-10-17 16:50:57.968089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:36:21.883 [2024-10-17 16:50:57.968102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:36:21.883 [2024-10-17 16:50:57.968113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
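Each FTL management step above is bracketed by trace_step NOTICE lines: 427 marks the Action, 428 its name, 430 the duration, 431 the status. That makes the startup-time breakdown easy to pull out of the console log after the fact. A minimal sketch, assuming one NOTICE entry per line as in the raw console output; build.log is a hypothetical saved copy of this log, not a file the test produces:

    # Pair each "name:" line (428) with its "duration:" line (430) and
    # list the slowest FTL management steps first.
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%s\t%s\n", name, $0 }' \
        build.log | sort -t$'\t' -k2,2 -rn | head

On this run that would surface entries like "Persist P2L metadata" at 109.086 ms and "Restore P2L checkpoints" at 85.055 ms, with most other steps well under a millisecond.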
00:36:21.884 [2024-10-17 16:50:58.053052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 16:50:58.053122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:36:21.884 [2024-10-17 16:50:58.053142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.055 ms 00:36:21.884 [2024-10-17 16:50:58.053153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.064032] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:36:21.884 [2024-10-17 16:50:58.067074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 16:50:58.067108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:36:21.884 [2024-10-17 16:50:58.067123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.888 ms 00:36:21.884 [2024-10-17 16:50:58.067136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.067240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 16:50:58.067258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:36:21.884 [2024-10-17 16:50:58.067270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:21.884 [2024-10-17 16:50:58.067280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.067387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 16:50:58.067405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:36:21.884 [2024-10-17 16:50:58.067416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:36:21.884 [2024-10-17 16:50:58.067426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.067454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 16:50:58.067465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:36:21.884 [2024-10-17 16:50:58.067481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:36:21.884 [2024-10-17 16:50:58.067491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.067525] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:36:21.884 [2024-10-17 16:50:58.067537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 16:50:58.067548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:36:21.884 [2024-10-17 16:50:58.067558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:36:21.884 [2024-10-17 16:50:58.067569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.104609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 16:50:58.104765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:36:21.884 [2024-10-17 16:50:58.104846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.080 ms 00:36:21.884 [2024-10-17 16:50:58.104884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.104977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:21.884 [2024-10-17 
16:50:58.105053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:36:21.884 [2024-10-17 16:50:58.105092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:36:21.884 [2024-10-17 16:50:58.105123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:21.884 [2024-10-17 16:50:58.106224] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.736 ms, result 0 00:36:23.261 [2024-10-17T16:51:00.127Z] Copying: 26/1024 [MB] (26 MBps) [37 intermediate spdk_dd progress frames elided] [2024-10-17T16:51:36.389Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-17 16:51:36.291632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.090 [2024-10-17 16:51:36.291714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:00.090 [2024-10-17 16:51:36.291734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:00.090 [2024-10-17 16:51:36.291746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.090 [2024-10-17 16:51:36.291860] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:00.090 [2024-10-17 16:51:36.296315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0]
Action 00:37:00.090 [2024-10-17 16:51:36.296455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:00.090 [2024-10-17 16:51:36.296598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.437 ms 00:37:00.090 [2024-10-17 16:51:36.296638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.090 [2024-10-17 16:51:36.305275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.090 [2024-10-17 16:51:36.305428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:00.090 [2024-10-17 16:51:36.305512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.556 ms 00:37:00.090 [2024-10-17 16:51:36.305549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.090 [2024-10-17 16:51:36.329101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.090 [2024-10-17 16:51:36.329250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:00.090 [2024-10-17 16:51:36.329354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.546 ms 00:37:00.090 [2024-10-17 16:51:36.329393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.090 [2024-10-17 16:51:36.334396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.090 [2024-10-17 16:51:36.334521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:00.090 [2024-10-17 16:51:36.334658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.954 ms 00:37:00.090 [2024-10-17 16:51:36.334707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.090 [2024-10-17 16:51:36.372070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.090 [2024-10-17 16:51:36.372213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:00.090 [2024-10-17 16:51:36.372235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.338 ms 00:37:00.090 [2024-10-17 16:51:36.372247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.349 [2024-10-17 16:51:36.393303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.349 [2024-10-17 16:51:36.393439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:00.349 [2024-10-17 16:51:36.393462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.055 ms 00:37:00.349 [2024-10-17 16:51:36.393474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.349 [2024-10-17 16:51:36.502449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.349 [2024-10-17 16:51:36.502494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:00.349 [2024-10-17 16:51:36.502509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.086 ms 00:37:00.349 [2024-10-17 16:51:36.502520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.349 [2024-10-17 16:51:36.539744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.349 [2024-10-17 16:51:36.539782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:00.349 [2024-10-17 16:51:36.539798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.259 ms 00:37:00.349 [2024-10-17 16:51:36.539809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.349 [2024-10-17 
16:51:36.575833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.349 [2024-10-17 16:51:36.575881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:00.349 [2024-10-17 16:51:36.575896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.044 ms 00:37:00.349 [2024-10-17 16:51:36.575907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.349 [2024-10-17 16:51:36.610970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.349 [2024-10-17 16:51:36.611105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:00.349 [2024-10-17 16:51:36.611125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.084 ms 00:37:00.349 [2024-10-17 16:51:36.611151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.609 [2024-10-17 16:51:36.647448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.609 [2024-10-17 16:51:36.647491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:00.609 [2024-10-17 16:51:36.647507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.282 ms 00:37:00.609 [2024-10-17 16:51:36.647517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.609 [2024-10-17 16:51:36.647556] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:00.609 [2024-10-17 16:51:36.647573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104448 / 261120 wr_cnt: 1 state: open 00:37:00.609 [2024-10-17 16:51:36.647587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:00.609 [2024-10-17 16:51:36.647599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:00.609 [2024-10-17 16:51:36.647610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 
0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.647994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648297] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 
16:51:36.648577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:00.610 [2024-10-17 16:51:36.648642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:00.611 [2024-10-17 16:51:36.648653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:00.611 [2024-10-17 16:51:36.648665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:00.611 [2024-10-17 16:51:36.648676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:00.611 [2024-10-17 16:51:36.648686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:00.611 [2024-10-17 16:51:36.648712] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:00.611 [2024-10-17 16:51:36.648723] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b62d67c-a34f-4c56-8033-432fbb454aa3 00:37:00.611 [2024-10-17 16:51:36.648734] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104448 00:37:00.611 [2024-10-17 16:51:36.648744] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105408 00:37:00.611 [2024-10-17 16:51:36.648770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104448 00:37:00.611 [2024-10-17 16:51:36.648781] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:37:00.611 [2024-10-17 16:51:36.648792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:00.611 [2024-10-17 16:51:36.648803] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:00.611 [2024-10-17 16:51:36.648813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:00.611 [2024-10-17 16:51:36.648822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:00.611 [2024-10-17 16:51:36.648831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:00.611 [2024-10-17 16:51:36.648840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.611 [2024-10-17 16:51:36.648851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:00.611 [2024-10-17 16:51:36.648862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:37:00.611 [2024-10-17 16:51:36.648873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.611 [2024-10-17 16:51:36.668936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.611 [2024-10-17 16:51:36.668981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:00.611 [2024-10-17 16:51:36.668996] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.059 ms 00:37:00.611 [2024-10-17 16:51:36.669022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.611 [2024-10-17 16:51:36.669579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:00.611 [2024-10-17 16:51:36.669594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:00.611 [2024-10-17 16:51:36.669606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:37:00.611 [2024-10-17 16:51:36.669616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.611 [2024-10-17 16:51:36.719135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.611 [2024-10-17 16:51:36.719180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:00.611 [2024-10-17 16:51:36.719195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.611 [2024-10-17 16:51:36.719206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.611 [2024-10-17 16:51:36.719270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.611 [2024-10-17 16:51:36.719281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:00.611 [2024-10-17 16:51:36.719293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.611 [2024-10-17 16:51:36.719303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.611 [2024-10-17 16:51:36.719376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.611 [2024-10-17 16:51:36.719390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:00.611 [2024-10-17 16:51:36.719400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.611 [2024-10-17 16:51:36.719410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.611 [2024-10-17 16:51:36.719427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.611 [2024-10-17 16:51:36.719438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:00.611 [2024-10-17 16:51:36.719448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.611 [2024-10-17 16:51:36.719458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.611 [2024-10-17 16:51:36.843261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.611 [2024-10-17 16:51:36.843322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:00.611 [2024-10-17 16:51:36.843338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.611 [2024-10-17 16:51:36.843350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.943385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.931 [2024-10-17 16:51:36.943442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:00.931 [2024-10-17 16:51:36.943457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.931 [2024-10-17 16:51:36.943467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.943556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.931 [2024-10-17 16:51:36.943574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize core IO channel 00:37:00.931 [2024-10-17 16:51:36.943585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.931 [2024-10-17 16:51:36.943596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.943641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.931 [2024-10-17 16:51:36.943652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:00.931 [2024-10-17 16:51:36.943663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.931 [2024-10-17 16:51:36.943673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.943816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.931 [2024-10-17 16:51:36.943831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:00.931 [2024-10-17 16:51:36.943847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.931 [2024-10-17 16:51:36.943858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.943899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.931 [2024-10-17 16:51:36.943911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:00.931 [2024-10-17 16:51:36.943921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.931 [2024-10-17 16:51:36.943932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.943968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.931 [2024-10-17 16:51:36.943978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:00.931 [2024-10-17 16:51:36.943993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.931 [2024-10-17 16:51:36.944003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.944045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:00.931 [2024-10-17 16:51:36.944057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:00.931 [2024-10-17 16:51:36.944068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:00.931 [2024-10-17 16:51:36.944078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:00.931 [2024-10-17 16:51:36.944200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 653.621 ms, result 0 00:37:02.307 00:37:02.307 00:37:02.307 16:51:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:37:03.684 16:51:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:03.943 [2024-10-17 16:51:40.023861] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
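For reference, the WAF figure in the shutdown statistics dump above is simply total device writes divided by user writes; with the dumped values, 105408 / 104448 ≈ 1.0092, i.e. less than 1% write amplification for this workload. A one-liner to reproduce the arithmetic:

    # WAF = total writes / user writes, from the ftl_debug stats above.
    awk 'BEGIN { printf "WAF = %.4f\n", 105408 / 104448 }'    # prints WAF = 1.0092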
00:37:03.943 [2024-10-17 16:51:40.024234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79552 ] 00:37:03.943 [2024-10-17 16:51:40.210773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.203 [2024-10-17 16:51:40.323834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:04.462 [2024-10-17 16:51:40.686074] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:04.462 [2024-10-17 16:51:40.686140] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:04.722 [2024-10-17 16:51:40.846588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.722 [2024-10-17 16:51:40.846645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:04.722 [2024-10-17 16:51:40.846661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:04.722 [2024-10-17 16:51:40.846678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.722 [2024-10-17 16:51:40.846739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.722 [2024-10-17 16:51:40.846752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:04.722 [2024-10-17 16:51:40.846763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:37:04.722 [2024-10-17 16:51:40.846777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.722 [2024-10-17 16:51:40.846799] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:04.722 [2024-10-17 16:51:40.847750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:04.722 [2024-10-17 16:51:40.847909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.722 [2024-10-17 16:51:40.847931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:04.722 [2024-10-17 16:51:40.847942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:37:04.722 [2024-10-17 16:51:40.847953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.722 [2024-10-17 16:51:40.849456] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:37:04.722 [2024-10-17 16:51:40.868402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.722 [2024-10-17 16:51:40.868439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:37:04.722 [2024-10-17 16:51:40.868452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.978 ms 00:37:04.722 [2024-10-17 16:51:40.868478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.722 [2024-10-17 16:51:40.868541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.722 [2024-10-17 16:51:40.868556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:37:04.722 [2024-10-17 16:51:40.868567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:37:04.722 [2024-10-17 16:51:40.868577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.722 [2024-10-17 16:51:40.875306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
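The spdk_dd run above (dirty_shutdown.sh@93) is bringing the FTL device back up after the dirty shutdown so the test data can be read out of ftl0 and checksummed. A sketch of that read-back-and-compare pattern, assuming matching md5 digests are the pass condition; /tmp/readback is a hypothetical scratch path, all other paths and flags are taken from the log:

    # Read the test region back from the FTL bdev into a scratch file...
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/tmp/readback \
        --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    # ...and compare its digest against the reference file written earlier.
    md5sum /tmp/readback /home/vagrant/spdk_repo/spdk/test/ftl/testfile2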
00:37:04.722 [2024-10-17 16:51:40.875478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:04.723 [2024-10-17 16:51:40.875499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.672 ms 00:37:04.723 [2024-10-17 16:51:40.875510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.875595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.875609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:04.723 [2024-10-17 16:51:40.875620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:37:04.723 [2024-10-17 16:51:40.875630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.875672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.875683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:04.723 [2024-10-17 16:51:40.875694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:37:04.723 [2024-10-17 16:51:40.875725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.875750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:04.723 [2024-10-17 16:51:40.880509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.880555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:04.723 [2024-10-17 16:51:40.880568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.773 ms 00:37:04.723 [2024-10-17 16:51:40.880579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.880613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.880624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:04.723 [2024-10-17 16:51:40.880635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:37:04.723 [2024-10-17 16:51:40.880645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.880713] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:37:04.723 [2024-10-17 16:51:40.880738] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:37:04.723 [2024-10-17 16:51:40.880773] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:37:04.723 [2024-10-17 16:51:40.880794] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:37:04.723 [2024-10-17 16:51:40.880884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:04.723 [2024-10-17 16:51:40.880897] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:04.723 [2024-10-17 16:51:40.880910] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:04.723 [2024-10-17 16:51:40.880923] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:04.723 [2024-10-17 16:51:40.880935] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:04.723 [2024-10-17 16:51:40.880946] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:04.723 [2024-10-17 16:51:40.880956] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:04.723 [2024-10-17 16:51:40.880966] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:04.723 [2024-10-17 16:51:40.880977] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:04.723 [2024-10-17 16:51:40.880988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.881001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:04.723 [2024-10-17 16:51:40.881012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:37:04.723 [2024-10-17 16:51:40.881022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.881092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.881107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:04.723 [2024-10-17 16:51:40.881118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:37:04.723 [2024-10-17 16:51:40.881128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.881222] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:04.723 [2024-10-17 16:51:40.881237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:04.723 [2024-10-17 16:51:40.881252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:04.723 [2024-10-17 16:51:40.881282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:04.723 [2024-10-17 16:51:40.881312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:04.723 [2024-10-17 16:51:40.881331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:04.723 [2024-10-17 16:51:40.881341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:04.723 [2024-10-17 16:51:40.881351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:04.723 [2024-10-17 16:51:40.881360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:04.723 [2024-10-17 16:51:40.881370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:04.723 [2024-10-17 16:51:40.881388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:04.723 [2024-10-17 16:51:40.881407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881417] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:04.723 [2024-10-17 16:51:40.881437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:04.723 [2024-10-17 16:51:40.881466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:04.723 [2024-10-17 16:51:40.881493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:04.723 [2024-10-17 16:51:40.881521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:04.723 [2024-10-17 16:51:40.881549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:04.723 [2024-10-17 16:51:40.881568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:04.723 [2024-10-17 16:51:40.881577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:04.723 [2024-10-17 16:51:40.881586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:04.723 [2024-10-17 16:51:40.881595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:04.723 [2024-10-17 16:51:40.881605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:04.723 [2024-10-17 16:51:40.881614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:04.723 [2024-10-17 16:51:40.881632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:04.723 [2024-10-17 16:51:40.881643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881652] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:04.723 [2024-10-17 16:51:40.881662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:04.723 [2024-10-17 16:51:40.881672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:04.723 [2024-10-17 16:51:40.881692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:04.723 [2024-10-17 16:51:40.881713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:04.723 [2024-10-17 16:51:40.881723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:04.723 
[2024-10-17 16:51:40.881733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:04.723 [2024-10-17 16:51:40.881742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:04.723 [2024-10-17 16:51:40.881751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:04.723 [2024-10-17 16:51:40.881762] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:04.723 [2024-10-17 16:51:40.881774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:04.723 [2024-10-17 16:51:40.881786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:04.723 [2024-10-17 16:51:40.881797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:04.723 [2024-10-17 16:51:40.881807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:04.723 [2024-10-17 16:51:40.881818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:04.723 [2024-10-17 16:51:40.881828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:04.723 [2024-10-17 16:51:40.881839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:04.723 [2024-10-17 16:51:40.881849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:04.723 [2024-10-17 16:51:40.881860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:04.723 [2024-10-17 16:51:40.881870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:04.723 [2024-10-17 16:51:40.881881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:04.723 [2024-10-17 16:51:40.881892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:04.723 [2024-10-17 16:51:40.881903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:04.723 [2024-10-17 16:51:40.881913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:04.723 [2024-10-17 16:51:40.881923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:04.723 [2024-10-17 16:51:40.881933] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:04.723 [2024-10-17 16:51:40.881945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:04.723 [2024-10-17 16:51:40.881960] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:37:04.723 [2024-10-17 16:51:40.881970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:04.723 [2024-10-17 16:51:40.881980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:04.723 [2024-10-17 16:51:40.881992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:04.723 [2024-10-17 16:51:40.882003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.882014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:04.723 [2024-10-17 16:51:40.882024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:37:04.723 [2024-10-17 16:51:40.882034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.921303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.921456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:04.723 [2024-10-17 16:51:40.921543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.287 ms 00:37:04.723 [2024-10-17 16:51:40.921579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.921662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.921680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:04.723 [2024-10-17 16:51:40.921691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:37:04.723 [2024-10-17 16:51:40.921726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.974920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.974959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:04.723 [2024-10-17 16:51:40.974974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.220 ms 00:37:04.723 [2024-10-17 16:51:40.974985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.975025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.975036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:04.723 [2024-10-17 16:51:40.975048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:04.723 [2024-10-17 16:51:40.975058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.975537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.975552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:04.723 [2024-10-17 16:51:40.975563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:37:04.723 [2024-10-17 16:51:40.975573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.975693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.975729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:04.723 [2024-10-17 16:51:40.975740] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:37:04.723 [2024-10-17 16:51:40.975750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:40.995117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:40.995153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:04.723 [2024-10-17 16:51:40.995168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.373 ms 00:37:04.723 [2024-10-17 16:51:40.995182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.723 [2024-10-17 16:51:41.014454] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:37:04.723 [2024-10-17 16:51:41.014491] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:37:04.723 [2024-10-17 16:51:41.014507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.723 [2024-10-17 16:51:41.014518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:37:04.723 [2024-10-17 16:51:41.014530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.244 ms 00:37:04.723 [2024-10-17 16:51:41.014540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.982 [2024-10-17 16:51:41.044115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.982 [2024-10-17 16:51:41.044172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:37:04.982 [2024-10-17 16:51:41.044192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.582 ms 00:37:04.982 [2024-10-17 16:51:41.044203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.982 [2024-10-17 16:51:41.062773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.982 [2024-10-17 16:51:41.062818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:37:04.982 [2024-10-17 16:51:41.062832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.556 ms 00:37:04.982 [2024-10-17 16:51:41.062857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.982 [2024-10-17 16:51:41.080292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.982 [2024-10-17 16:51:41.080326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:37:04.983 [2024-10-17 16:51:41.080338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.424 ms 00:37:04.983 [2024-10-17 16:51:41.080348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.081127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.081157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:04.983 [2024-10-17 16:51:41.081169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:37:04.983 [2024-10-17 16:51:41.081179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.165201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.165268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:37:04.983 [2024-10-17 16:51:41.165286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 84.121 ms 00:37:04.983 [2024-10-17 16:51:41.165303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.176370] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:37:04.983 [2024-10-17 16:51:41.179541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.179675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:04.983 [2024-10-17 16:51:41.179727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.199 ms 00:37:04.983 [2024-10-17 16:51:41.179740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.179844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.179856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:37:04.983 [2024-10-17 16:51:41.179868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:37:04.983 [2024-10-17 16:51:41.179878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.181365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.181405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:04.983 [2024-10-17 16:51:41.181418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:37:04.983 [2024-10-17 16:51:41.181428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.181460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.181472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:04.983 [2024-10-17 16:51:41.181482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:04.983 [2024-10-17 16:51:41.181492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.181552] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:37:04.983 [2024-10-17 16:51:41.181566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.181580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:37:04.983 [2024-10-17 16:51:41.181591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:37:04.983 [2024-10-17 16:51:41.181601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.217919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.218055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:04.983 [2024-10-17 16:51:41.218076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.356 ms 00:37:04.983 [2024-10-17 16:51:41.218087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:04.983 [2024-10-17 16:51:41.218190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:04.983 [2024-10-17 16:51:41.218205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:04.983 [2024-10-17 16:51:41.218216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:37:04.983 [2024-10-17 16:51:41.218226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
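Each FTL management step above is reported by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status quadruple, and the 'FTL startup' finish_msg summary that follows totals the run (372.855 ms, result 0). A minimal sketch in Python, assuming only the record format visible in this log (the log file name is hypothetical), that sums the per-step durations and flags any step with a nonzero status:

import re

# Per-step records from mngt/ftl_mngt.c:trace_step, as seen above, e.g.
#   "[FTL][ftl0] duration: 6.672 ms"  and  "[FTL][ftl0] status: 0".
# The finish_msg summary line uses "duration = ..." (with '='), so it is
# not double-counted by this pattern.
DURATION_RE = re.compile(r"\[FTL\]\[\w+\] duration: ([0-9.]+) ms")
STATUS_RE = re.compile(r"\[FTL\]\[\w+\] status: (-?\d+)")

def summarize_steps(log_text: str) -> tuple[float, list[int]]:
    """Return (total per-step duration in ms, list of nonzero statuses)."""
    total = sum(float(ms) for ms in DURATION_RE.findall(log_text))
    failures = [int(s) for s in STATUS_RE.findall(log_text) if int(s) != 0]
    return total, failures

# Usage (hypothetical file name):
#   total_ms, failures = summarize_steps(open("autotest.log").read())
#   assert not failures   # every step in this run reports status 0

Summed this way over the startup section, the per-step durations should account for most of the 372.855 ms total printed by finish_msg below.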
00:37:04.983 [2024-10-17 16:51:41.219280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.855 ms, result 0 00:37:06.374  [2024-10-17T16:51:43.666Z] Copying: 1204/1048576 [kB] (1204 kBps) [2024-10-17T16:51:44.603Z] Copying: 9656/1048576 [kB] (8452 kBps) [2024-10-17T16:51:45.540Z] Copying: 44/1024 [MB] (34 MBps) [2024-10-17T16:51:46.476Z] Copying: 79/1024 [MB] (34 MBps) [2024-10-17T16:51:47.852Z] Copying: 114/1024 [MB] (35 MBps) [2024-10-17T16:51:48.785Z] Copying: 149/1024 [MB] (34 MBps) [2024-10-17T16:51:49.724Z] Copying: 182/1024 [MB] (33 MBps) [2024-10-17T16:51:50.669Z] Copying: 216/1024 [MB] (34 MBps) [2024-10-17T16:51:51.605Z] Copying: 251/1024 [MB] (34 MBps) [2024-10-17T16:51:52.541Z] Copying: 286/1024 [MB] (35 MBps) [2024-10-17T16:51:53.477Z] Copying: 322/1024 [MB] (35 MBps) [2024-10-17T16:51:54.854Z] Copying: 357/1024 [MB] (35 MBps) [2024-10-17T16:51:55.422Z] Copying: 392/1024 [MB] (34 MBps) [2024-10-17T16:51:56.825Z] Copying: 426/1024 [MB] (34 MBps) [2024-10-17T16:51:57.759Z] Copying: 461/1024 [MB] (34 MBps) [2024-10-17T16:51:58.694Z] Copying: 496/1024 [MB] (35 MBps) [2024-10-17T16:51:59.633Z] Copying: 531/1024 [MB] (34 MBps) [2024-10-17T16:52:00.570Z] Copying: 566/1024 [MB] (35 MBps) [2024-10-17T16:52:01.507Z] Copying: 602/1024 [MB] (35 MBps) [2024-10-17T16:52:02.443Z] Copying: 637/1024 [MB] (34 MBps) [2024-10-17T16:52:03.820Z] Copying: 673/1024 [MB] (36 MBps) [2024-10-17T16:52:04.754Z] Copying: 709/1024 [MB] (35 MBps) [2024-10-17T16:52:05.691Z] Copying: 744/1024 [MB] (35 MBps) [2024-10-17T16:52:06.628Z] Copying: 780/1024 [MB] (35 MBps) [2024-10-17T16:52:07.565Z] Copying: 816/1024 [MB] (35 MBps) [2024-10-17T16:52:08.499Z] Copying: 851/1024 [MB] (34 MBps) [2024-10-17T16:52:09.433Z] Copying: 884/1024 [MB] (33 MBps) [2024-10-17T16:52:10.809Z] Copying: 918/1024 [MB] (33 MBps) [2024-10-17T16:52:11.745Z] Copying: 952/1024 [MB] (34 MBps) [2024-10-17T16:52:12.682Z] Copying: 987/1024 [MB] (34 MBps) [2024-10-17T16:52:12.682Z] Copying: 1021/1024 [MB] (34 MBps) [2024-10-17T16:52:13.251Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-10-17 16:52:12.970580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:12.970669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:36.952 [2024-10-17 16:52:12.970691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:37:36.952 [2024-10-17 16:52:12.970735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:12.970787] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:36.952 [2024-10-17 16:52:12.974883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:12.974935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:36.952 [2024-10-17 16:52:12.974953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.073 ms 00:37:36.952 [2024-10-17 16:52:12.974968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:12.976329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:12.976373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:36.952 [2024-10-17 16:52:12.976390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:37:36.952 [2024-10-17 16:52:12.976411] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:12.986145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:12.986297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:36.952 [2024-10-17 16:52:12.986390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.715 ms 00:37:36.952 [2024-10-17 16:52:12.986430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:12.991535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:12.991664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:36.952 [2024-10-17 16:52:12.991684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:37:36.952 [2024-10-17 16:52:12.991695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.029029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:13.029096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:36.952 [2024-10-17 16:52:13.029113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.306 ms 00:37:36.952 [2024-10-17 16:52:13.029124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.050658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:13.050731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:36.952 [2024-10-17 16:52:13.050749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.502 ms 00:37:36.952 [2024-10-17 16:52:13.050760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.052765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:13.052910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:36.952 [2024-10-17 16:52:13.052933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.946 ms 00:37:36.952 [2024-10-17 16:52:13.052944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.088836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:13.088873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:36.952 [2024-10-17 16:52:13.088887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.925 ms 00:37:36.952 [2024-10-17 16:52:13.088898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.124173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:13.124335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:36.952 [2024-10-17 16:52:13.124368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.291 ms 00:37:36.952 [2024-10-17 16:52:13.124379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.159923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:13.159965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:36.952 [2024-10-17 16:52:13.159979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 35.564 ms 00:37:36.952 [2024-10-17 16:52:13.159990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.196830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.952 [2024-10-17 16:52:13.196879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:36.952 [2024-10-17 16:52:13.196895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.813 ms 00:37:36.952 [2024-10-17 16:52:13.196906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.952 [2024-10-17 16:52:13.196945] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:36.952 [2024-10-17 16:52:13.196962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:37:36.952 [2024-10-17 16:52:13.196975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:37:36.952 [2024-10-17 16:52:13.196987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.196999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:36.952 [2024-10-17 16:52:13.197145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:37:36.953 [2024-10-17 16:52:13.197187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.197994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198165] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:36.953 [2024-10-17 16:52:13.198300] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:36.953 [2024-10-17 16:52:13.198324] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b62d67c-a34f-4c56-8033-432fbb454aa3 00:37:36.953 [2024-10-17 16:52:13.198335] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:37:36.954 [2024-10-17 16:52:13.198346] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 160192 00:37:36.954 [2024-10-17 16:52:13.198356] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 158208 00:37:36.954 [2024-10-17 16:52:13.198367] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:37:36.954 [2024-10-17 16:52:13.198382] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:36.954 [2024-10-17 16:52:13.198392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:36.954 [2024-10-17 16:52:13.198403] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:36.954 [2024-10-17 16:52:13.198423] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:36.954 [2024-10-17 16:52:13.198433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:36.954 [2024-10-17 16:52:13.198444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.954 [2024-10-17 16:52:13.198455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:36.954 [2024-10-17 16:52:13.198466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.501 ms 00:37:36.954 [2024-10-17 16:52:13.198476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.954 [2024-10-17 16:52:13.218247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.954 [2024-10-17 16:52:13.218285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:36.954 [2024-10-17 16:52:13.218305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.760 ms 00:37:36.954 [2024-10-17 16:52:13.218323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:36.954 [2024-10-17 16:52:13.218846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:36.954 [2024-10-17 16:52:13.218878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:36.954 [2024-10-17 16:52:13.218891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:37:36.954 [2024-10-17 16:52:13.218902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.270219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
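The statistics dump above makes the write-amplification figure easy to check by hand: assuming WAF here is the usual ratio of total media writes to user writes (which the dumped numbers bear out), 160192 / 158208 is approximately 1.0125, the value ftl_debug.c reports. As a quick check:

# Numbers taken from the ftl_debug.c stats dump above.
total_writes = 160192   # all media writes (user data plus FTL metadata)
user_writes = 158208    # writes issued by the user workload
waf = total_writes / user_writes
print(f"WAF: {waf:.4f}")   # -> WAF: 1.0125, matching the dumped value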
00:37:37.213 [2024-10-17 16:52:13.270282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:37.213 [2024-10-17 16:52:13.270299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.270310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.270380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.270390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:37.213 [2024-10-17 16:52:13.270401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.270412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.270486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.270499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:37.213 [2024-10-17 16:52:13.270515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.270525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.270543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.270554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:37.213 [2024-10-17 16:52:13.270564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.270574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.394783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.394854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:37.213 [2024-10-17 16:52:13.394871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.394881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.495403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.495469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:37.213 [2024-10-17 16:52:13.495485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.495496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.495592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.495604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:37.213 [2024-10-17 16:52:13.495615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.495633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.495684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.495696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:37.213 [2024-10-17 16:52:13.495734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.495765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 
16:52:13.495897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.495911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:37.213 [2024-10-17 16:52:13.495922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.213 [2024-10-17 16:52:13.495932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.213 [2024-10-17 16:52:13.495973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.213 [2024-10-17 16:52:13.495986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:37.214 [2024-10-17 16:52:13.495996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.214 [2024-10-17 16:52:13.496006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.214 [2024-10-17 16:52:13.496044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.214 [2024-10-17 16:52:13.496055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:37.214 [2024-10-17 16:52:13.496065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.214 [2024-10-17 16:52:13.496076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.214 [2024-10-17 16:52:13.496120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:37.214 [2024-10-17 16:52:13.496132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:37.214 [2024-10-17 16:52:13.496143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:37.214 [2024-10-17 16:52:13.496153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:37.214 [2024-10-17 16:52:13.496279] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.543 ms, result 0 00:37:38.592 00:37:38.592 00:37:38.592 16:52:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:37:39.996 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:37:39.996 16:52:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:40.255 [2024-10-17 16:52:16.343796] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
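At this point dirty_shutdown.sh verifies the first half of the test data with md5sum -c before reading the remaining 262144 blocks back through spdk_dd (--skip=262144) for a second comparison. A rough Python equivalent of that integrity check, shown only as a sketch; the digest value is illustrative, the real one lives in testfile.md5:

import hashlib

def md5_matches(path: str, expected_hex: str) -> bool:
    """Chunked md5 of a large file, compared against an expected digest."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Usage (digest shown is illustrative, not the test's real checksum):
#   md5_matches("/home/vagrant/spdk_repo/spdk/test/ftl/testfile",
#               "0123456789abcdef0123456789abcdef")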
00:37:40.255 [2024-10-17 16:52:16.343942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79916 ] 00:37:40.255 [2024-10-17 16:52:16.517392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:40.513 [2024-10-17 16:52:16.632249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.772 [2024-10-17 16:52:16.981350] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:40.772 [2024-10-17 16:52:16.981425] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:37:41.031 [2024-10-17 16:52:17.142439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.031 [2024-10-17 16:52:17.142498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:41.031 [2024-10-17 16:52:17.142516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:41.031 [2024-10-17 16:52:17.142532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.031 [2024-10-17 16:52:17.142579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.031 [2024-10-17 16:52:17.142591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:41.032 [2024-10-17 16:52:17.142602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:37:41.032 [2024-10-17 16:52:17.142615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.142637] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:41.032 [2024-10-17 16:52:17.143567] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:41.032 [2024-10-17 16:52:17.143595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.143609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:41.032 [2024-10-17 16:52:17.143621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:37:41.032 [2024-10-17 16:52:17.143631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.145101] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:37:41.032 [2024-10-17 16:52:17.163607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.163644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:37:41.032 [2024-10-17 16:52:17.163659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.537 ms 00:37:41.032 [2024-10-17 16:52:17.163686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.163758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.163775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:37:41.032 [2024-10-17 16:52:17.163787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:37:41.032 [2024-10-17 16:52:17.163797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.170483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:37:41.032 [2024-10-17 16:52:17.170642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:41.032 [2024-10-17 16:52:17.170678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.625 ms 00:37:41.032 [2024-10-17 16:52:17.170690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.170788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.170802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:41.032 [2024-10-17 16:52:17.170813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:37:41.032 [2024-10-17 16:52:17.170823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.170867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.170878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:41.032 [2024-10-17 16:52:17.170890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:37:41.032 [2024-10-17 16:52:17.170899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.170924] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:41.032 [2024-10-17 16:52:17.175782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.175811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:41.032 [2024-10-17 16:52:17.175823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.871 ms 00:37:41.032 [2024-10-17 16:52:17.175833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.175883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.175894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:41.032 [2024-10-17 16:52:17.175905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:37:41.032 [2024-10-17 16:52:17.175915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.175969] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:37:41.032 [2024-10-17 16:52:17.175993] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:37:41.032 [2024-10-17 16:52:17.176029] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:37:41.032 [2024-10-17 16:52:17.176049] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:37:41.032 [2024-10-17 16:52:17.176138] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:41.032 [2024-10-17 16:52:17.176151] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:41.032 [2024-10-17 16:52:17.176165] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:41.032 [2024-10-17 16:52:17.176178] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:41.032 [2024-10-17 16:52:17.176191] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:41.032 [2024-10-17 16:52:17.176202] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:41.032 [2024-10-17 16:52:17.176212] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:41.032 [2024-10-17 16:52:17.176222] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:41.032 [2024-10-17 16:52:17.176232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:41.032 [2024-10-17 16:52:17.176243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.176257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:41.032 [2024-10-17 16:52:17.176267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:37:41.032 [2024-10-17 16:52:17.176277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.176359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.032 [2024-10-17 16:52:17.176370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:41.032 [2024-10-17 16:52:17.176380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:37:41.032 [2024-10-17 16:52:17.176389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.032 [2024-10-17 16:52:17.176482] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:41.032 [2024-10-17 16:52:17.176497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:41.032 [2024-10-17 16:52:17.176511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:41.032 [2024-10-17 16:52:17.176531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:41.032 [2024-10-17 16:52:17.176541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:37:41.032 [2024-10-17 16:52:17.176567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:41.032 [2024-10-17 16:52:17.176577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:41.032 [2024-10-17 16:52:17.176587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:41.032 [2024-10-17 16:52:17.176597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:41.032 [2024-10-17 16:52:17.176606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:41.032 [2024-10-17 16:52:17.176616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:41.032 [2024-10-17 16:52:17.176627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:41.032 [2024-10-17 16:52:17.176637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:41.032 [2024-10-17 16:52:17.176646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:41.032 [2024-10-17 16:52:17.176656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:41.032 [2024-10-17 16:52:17.176676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:41.032 [2024-10-17 16:52:17.176685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:41.032 [2024-10-17 16:52:17.176694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:41.032 [2024-10-17 16:52:17.176704] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:41.032 [2024-10-17 16:52:17.176867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:41.032 [2024-10-17 16:52:17.176915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:41.032 [2024-10-17 16:52:17.176947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:41.032 [2024-10-17 16:52:17.176977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:41.032 [2024-10-17 16:52:17.177007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:41.032 [2024-10-17 16:52:17.177036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:41.032 [2024-10-17 16:52:17.177065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:41.032 [2024-10-17 16:52:17.177094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:41.032 [2024-10-17 16:52:17.177174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:41.032 [2024-10-17 16:52:17.177209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:41.032 [2024-10-17 16:52:17.177238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:41.032 [2024-10-17 16:52:17.177267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:41.032 [2024-10-17 16:52:17.177296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:41.032 [2024-10-17 16:52:17.177325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:41.032 [2024-10-17 16:52:17.177354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:41.032 [2024-10-17 16:52:17.177428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:41.032 [2024-10-17 16:52:17.177461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:41.032 [2024-10-17 16:52:17.177491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:41.032 [2024-10-17 16:52:17.177520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:41.032 [2024-10-17 16:52:17.177549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:41.032 [2024-10-17 16:52:17.177579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:41.032 [2024-10-17 16:52:17.177608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:41.032 [2024-10-17 16:52:17.177770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:41.032 [2024-10-17 16:52:17.177803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:41.032 [2024-10-17 16:52:17.177832] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:41.032 [2024-10-17 16:52:17.177863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:41.032 [2024-10-17 16:52:17.177892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:41.032 [2024-10-17 16:52:17.177922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:41.032 [2024-10-17 16:52:17.178001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:41.032 [2024-10-17 16:52:17.178037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:41.032 [2024-10-17 16:52:17.178066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:41.032 
[2024-10-17 16:52:17.178095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:41.033 [2024-10-17 16:52:17.178124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:41.033 [2024-10-17 16:52:17.178153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:41.033 [2024-10-17 16:52:17.178167] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:41.033 [2024-10-17 16:52:17.178181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:41.033 [2024-10-17 16:52:17.178193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:41.033 [2024-10-17 16:52:17.178204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:41.033 [2024-10-17 16:52:17.178214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:41.033 [2024-10-17 16:52:17.178225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:41.033 [2024-10-17 16:52:17.178235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:41.033 [2024-10-17 16:52:17.178246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:41.033 [2024-10-17 16:52:17.178257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:41.033 [2024-10-17 16:52:17.178267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:37:41.033 [2024-10-17 16:52:17.178277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:41.033 [2024-10-17 16:52:17.178287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:41.033 [2024-10-17 16:52:17.178297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:41.033 [2024-10-17 16:52:17.178308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:41.033 [2024-10-17 16:52:17.178318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:41.033 [2024-10-17 16:52:17.178328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:41.033 [2024-10-17 16:52:17.178338] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:41.033 [2024-10-17 16:52:17.178350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:41.033 [2024-10-17 16:52:17.178368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:37:41.033 [2024-10-17 16:52:17.178378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:41.033 [2024-10-17 16:52:17.178389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:41.033 [2024-10-17 16:52:17.178399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:41.033 [2024-10-17 16:52:17.178412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.178422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:41.033 [2024-10-17 16:52:17.178433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:37:41.033 [2024-10-17 16:52:17.178444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.218463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.218632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:41.033 [2024-10-17 16:52:17.218719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.026 ms 00:37:41.033 [2024-10-17 16:52:17.218757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.218868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.218955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:41.033 [2024-10-17 16:52:17.218992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:37:41.033 [2024-10-17 16:52:17.219022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.275038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.275194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:41.033 [2024-10-17 16:52:17.275268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.978 ms 00:37:41.033 [2024-10-17 16:52:17.275303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.275367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.275399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:41.033 [2024-10-17 16:52:17.275430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:37:41.033 [2024-10-17 16:52:17.275460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.276037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.276148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:41.033 [2024-10-17 16:52:17.276219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:37:41.033 [2024-10-17 16:52:17.276253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.276403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.276439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:41.033 [2024-10-17 16:52:17.276510] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:37:41.033 [2024-10-17 16:52:17.276556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.294990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.295124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:41.033 [2024-10-17 16:52:17.295210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.410 ms 00:37:41.033 [2024-10-17 16:52:17.295251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.033 [2024-10-17 16:52:17.314139] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:37:41.033 [2024-10-17 16:52:17.314294] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:37:41.033 [2024-10-17 16:52:17.314386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.033 [2024-10-17 16:52:17.314418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:37:41.033 [2024-10-17 16:52:17.314450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.030 ms 00:37:41.033 [2024-10-17 16:52:17.314481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.344061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.344248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:37:41.292 [2024-10-17 16:52:17.344336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.570 ms 00:37:41.292 [2024-10-17 16:52:17.344372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.363092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.363230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:37:41.292 [2024-10-17 16:52:17.363315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.684 ms 00:37:41.292 [2024-10-17 16:52:17.363350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.381180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.381313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:37:41.292 [2024-10-17 16:52:17.381382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.790 ms 00:37:41.292 [2024-10-17 16:52:17.381416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.382315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.382443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:41.292 [2024-10-17 16:52:17.382522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:37:41.292 [2024-10-17 16:52:17.382557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.466412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.466667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:37:41.292 [2024-10-17 16:52:17.466773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 83.939 ms 00:37:41.292 [2024-10-17 16:52:17.466835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.477829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:37:41.292 [2024-10-17 16:52:17.480911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.480943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:41.292 [2024-10-17 16:52:17.480960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.029 ms 00:37:41.292 [2024-10-17 16:52:17.480971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.481066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.481079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:37:41.292 [2024-10-17 16:52:17.481092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:37:41.292 [2024-10-17 16:52:17.481102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.481965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.481982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:41.292 [2024-10-17 16:52:17.481994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.821 ms 00:37:41.292 [2024-10-17 16:52:17.482005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.482027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.482038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:41.292 [2024-10-17 16:52:17.482049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:41.292 [2024-10-17 16:52:17.482059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.482114] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:37:41.292 [2024-10-17 16:52:17.482127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.482140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:37:41.292 [2024-10-17 16:52:17.482151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:37:41.292 [2024-10-17 16:52:17.482161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.518067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.518110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:41.292 [2024-10-17 16:52:17.518126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.943 ms 00:37:41.292 [2024-10-17 16:52:17.518137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:41.292 [2024-10-17 16:52:17.518220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:41.292 [2024-10-17 16:52:17.518233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:41.292 [2024-10-17 16:52:17.518245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:37:41.292 [2024-10-17 16:52:17.518255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
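[Editor's note] Every Action / name / duration / status quartet in this log comes from FTL's management-step tracer: each startup or shutdown step is timed when it runs, and its name, elapsed time, and result code are logged when it completes. The sketch below shows that pattern in minimal form; the struct and function names are illustrative stand-ins, not SPDK's actual mngt/ftl_mngt.c internals.

#include <stdio.h>
#include <time.h>

/* Illustrative stand-ins -- not SPDK's real types. */
struct mngt_step {
	const char *name;
	int (*action)(void);
};

static double elapsed_ms(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

/* Run one step and emit the Action/name/duration/status quartet. */
static int trace_step(const char *dev, const struct mngt_step *step)
{
	struct timespec start, end;
	int status;

	clock_gettime(CLOCK_MONOTONIC, &start);
	status = step->action();
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("[FTL][%s] Action\n", dev);
	printf("[FTL][%s] name: %s\n", dev, step->name);
	printf("[FTL][%s] duration: %.3f ms\n", dev, elapsed_ms(start, end));
	printf("[FTL][%s] status: %d\n", dev, status);
	return status;
}

static int init_l2p(void) { return 0; } /* placeholder step body */

int main(void)
{
	struct mngt_step step = { "Initialize L2P", init_l2p };
	return trace_step("ftl0", &step); /* nonzero status would abort the pipeline */
}

A real pipeline runs a list of such steps in order ('FTL startup' above, 'FTL shutdown' below) and, on failure, walks them back as the Rollback entries near the end of the shutdown trace show.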
00:37:41.292 [2024-10-17 16:52:17.519334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 377.073 ms, result 0 00:37:42.668  [2024-10-17T16:52:19.902Z] Copying: 30/1024 [MB] (30 MBps) [progress meter: 37 intermediate updates elided, sustained 24-30 MBps] [2024-10-17T16:52:56.523Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-17 16:52:56.327049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.327159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:20.224 [2024-10-17 16:52:56.327189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:20.224 [2024-10-17 16:52:56.327208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.224 [2024-10-17 16:52:56.327250] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:20.224 [2024-10-17 16:52:56.334145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.334192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:20.224 [2024-10-17 16:52:56.334213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:38:20.224 [2024-10-17 16:52:56.334231] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:38:20.224 [2024-10-17 16:52:56.334532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.334553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:20.224 [2024-10-17 16:52:56.334571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:38:20.224 [2024-10-17 16:52:56.334588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.224 [2024-10-17 16:52:56.338233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.338286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:20.224 [2024-10-17 16:52:56.338304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:38:20.224 [2024-10-17 16:52:56.338321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.224 [2024-10-17 16:52:56.344351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.344401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:20.224 [2024-10-17 16:52:56.344415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.010 ms 00:38:20.224 [2024-10-17 16:52:56.344429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.224 [2024-10-17 16:52:56.381925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.381968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:20.224 [2024-10-17 16:52:56.381986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.471 ms 00:38:20.224 [2024-10-17 16:52:56.381998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.224 [2024-10-17 16:52:56.419894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.419965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:20.224 [2024-10-17 16:52:56.419989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.905 ms 00:38:20.224 [2024-10-17 16:52:56.420005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.224 [2024-10-17 16:52:56.421896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.224 [2024-10-17 16:52:56.421935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:20.224 [2024-10-17 16:52:56.421958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:38:20.224 [2024-10-17 16:52:56.421970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.225 [2024-10-17 16:52:56.459088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.225 [2024-10-17 16:52:56.459121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:20.225 [2024-10-17 16:52:56.459137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.159 ms 00:38:20.225 [2024-10-17 16:52:56.459148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.225 [2024-10-17 16:52:56.495426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.225 [2024-10-17 16:52:56.495471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:20.225 [2024-10-17 16:52:56.495485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.297 ms 00:38:20.225 
[2024-10-17 16:52:56.495495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.485 [2024-10-17 16:52:56.530416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.485 [2024-10-17 16:52:56.530449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:20.485 [2024-10-17 16:52:56.530462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.939 ms 00:38:20.485 [2024-10-17 16:52:56.530473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.485 [2024-10-17 16:52:56.565791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.485 [2024-10-17 16:52:56.565825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:20.485 [2024-10-17 16:52:56.565838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.294 ms 00:38:20.485 [2024-10-17 16:52:56.565849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.485 [2024-10-17 16:52:56.565887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:20.485 [2024-10-17 16:52:56.565904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:38:20.485 [2024-10-17 16:52:56.565918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:38:20.485 [2024-10-17 16:52:56.565930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.565940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.565952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.565963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.565974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.565985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.565996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:38:20.485 [2024-10-17 16:52:56.566095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:38:20.485 [2024-10-17 16:52:56.566472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566899] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.566982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:38:20.486 [2024-10-17 16:52:56.567001] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:38:20.486 [2024-10-17 16:52:56.567017] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3b62d67c-a34f-4c56-8033-432fbb454aa3 00:38:20.486 [2024-10-17 16:52:56.567028] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:38:20.486 [2024-10-17 16:52:56.567041] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:38:20.486 [2024-10-17 16:52:56.567051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:38:20.486 [2024-10-17 16:52:56.567062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:38:20.486 [2024-10-17 16:52:56.567071] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:38:20.486 [2024-10-17 16:52:56.567082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:38:20.486 [2024-10-17 16:52:56.567102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:38:20.486 [2024-10-17 16:52:56.567112] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:38:20.486 [2024-10-17 16:52:56.567121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:38:20.486 [2024-10-17 16:52:56.567132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.486 [2024-10-17 16:52:56.567143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:38:20.486 [2024-10-17 16:52:56.567153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.249 ms 00:38:20.486 [2024-10-17 16:52:56.567163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.486 [2024-10-17 16:52:56.586857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.486 [2024-10-17 16:52:56.586888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:38:20.486 [2024-10-17 16:52:56.586901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.690 ms 00:38:20.486 [2024-10-17 16:52:56.586912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.486 [2024-10-17 16:52:56.587453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:20.486 [2024-10-17 16:52:56.587470] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:38:20.487 [2024-10-17 16:52:56.587482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:38:20.487 [2024-10-17 16:52:56.587498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.487 [2024-10-17 16:52:56.638637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.487 [2024-10-17 16:52:56.638672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:20.487 [2024-10-17 16:52:56.638685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.487 [2024-10-17 16:52:56.638695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.487 [2024-10-17 16:52:56.638760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.487 [2024-10-17 16:52:56.638772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:20.487 [2024-10-17 16:52:56.638783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.487 [2024-10-17 16:52:56.638800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.487 [2024-10-17 16:52:56.638867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.487 [2024-10-17 16:52:56.638881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:20.487 [2024-10-17 16:52:56.638892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.487 [2024-10-17 16:52:56.638911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.487 [2024-10-17 16:52:56.638930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.487 [2024-10-17 16:52:56.638941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:20.487 [2024-10-17 16:52:56.638952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.487 [2024-10-17 16:52:56.638962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.487 [2024-10-17 16:52:56.765194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.487 [2024-10-17 16:52:56.765250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:20.487 [2024-10-17 16:52:56.765266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.487 [2024-10-17 16:52:56.765278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.745 [2024-10-17 16:52:56.868023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.745 [2024-10-17 16:52:56.868080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:20.745 [2024-10-17 16:52:56.868097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.745 [2024-10-17 16:52:56.868107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.745 [2024-10-17 16:52:56.868202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.745 [2024-10-17 16:52:56.868216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:20.745 [2024-10-17 16:52:56.868227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.745 [2024-10-17 16:52:56.868238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.745 [2024-10-17 16:52:56.868285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:38:20.745 [2024-10-17 16:52:56.868297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:20.745 [2024-10-17 16:52:56.868307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.745 [2024-10-17 16:52:56.868318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.745 [2024-10-17 16:52:56.868437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.746 [2024-10-17 16:52:56.868450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:20.746 [2024-10-17 16:52:56.868461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.746 [2024-10-17 16:52:56.868471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.746 [2024-10-17 16:52:56.868507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.746 [2024-10-17 16:52:56.868520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:38:20.746 [2024-10-17 16:52:56.868547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.746 [2024-10-17 16:52:56.868557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.746 [2024-10-17 16:52:56.868600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.746 [2024-10-17 16:52:56.868616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:20.746 [2024-10-17 16:52:56.868627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.746 [2024-10-17 16:52:56.868638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.746 [2024-10-17 16:52:56.868682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:38:20.746 [2024-10-17 16:52:56.868695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:20.746 [2024-10-17 16:52:56.868725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:38:20.746 [2024-10-17 16:52:56.868735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:20.746 [2024-10-17 16:52:56.868886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.692 ms, result 0 00:38:21.741 00:38:21.741 00:38:21.741 16:52:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:38:23.645 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:38:23.645 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:38:23.645 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:38:23.645 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:23.645 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:38:23.645 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:38:23.903 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:38:23.903 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:38:23.903 Process with pid 78193 is not found 00:38:23.903 16:52:59 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@37 -- # killprocess 78193 00:38:23.903 16:52:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78193 ']' 00:38:23.903 16:52:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 78193 00:38:23.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78193) - No such process 00:38:23.903 16:52:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 78193 is not found' 00:38:23.903 16:52:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:38:24.162 Remove shared memory files 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:38:24.162 00:38:24.162 real 3m28.228s 00:38:24.162 user 3m54.877s 00:38:24.162 sys 0m38.033s 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:38:24.162 16:53:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:24.162 ************************************ 00:38:24.162 END TEST ftl_dirty_shutdown 00:38:24.162 ************************************ 00:38:24.162 16:53:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:38:24.162 16:53:00 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:38:24.162 16:53:00 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:38:24.162 16:53:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:24.162 ************************************ 00:38:24.162 START TEST ftl_upgrade_shutdown 00:38:24.162 ************************************ 00:38:24.162 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:38:24.421 * Looking for test storage... 
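[Editor's note] The statistics dump a few entries back reported total writes: 960, user writes: 0, and WAF: inf. Write amplification factor is media (NAND) writes divided by host writes, so a counter snapshot taken after metadata-only activity, with no user I/O since the counters were reset, divides by zero and prints inf. A small worked example follows; the helper is illustrative and is not SPDK's ftl_debug.c code.

#include <math.h>
#include <stdio.h>

/* WAF = media (NAND) writes / user (host) writes.
 * Illustrative helper, not SPDK's actual stats code. */
static double waf(unsigned long total_writes, unsigned long user_writes)
{
	if (user_writes == 0)
		return INFINITY; /* what the log renders as "WAF: inf" */
	return (double)total_writes / (double)user_writes;
}

int main(void)
{
	printf("WAF: %g\n", waf(960, 0));     /* inf, as in the dump above */
	printf("WAF: %g\n", waf(1280, 1024)); /* 1.25, i.e. 25%% write overhead */
	return 0;
}

A WAF of 1.0 would mean every host write cost exactly one media write; anything above that is garbage-collection and metadata overhead.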
00:38:24.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:24.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.421 --rc genhtml_branch_coverage=1 00:38:24.421 --rc genhtml_function_coverage=1 00:38:24.421 --rc genhtml_legend=1 00:38:24.421 --rc geninfo_all_blocks=1 00:38:24.421 --rc geninfo_unexecuted_blocks=1 00:38:24.421 00:38:24.421 ' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:24.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.421 --rc genhtml_branch_coverage=1 00:38:24.421 --rc genhtml_function_coverage=1 00:38:24.421 --rc genhtml_legend=1 00:38:24.421 --rc geninfo_all_blocks=1 00:38:24.421 --rc geninfo_unexecuted_blocks=1 00:38:24.421 00:38:24.421 ' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:24.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.421 --rc genhtml_branch_coverage=1 00:38:24.421 --rc genhtml_function_coverage=1 00:38:24.421 --rc genhtml_legend=1 00:38:24.421 --rc geninfo_all_blocks=1 00:38:24.421 --rc geninfo_unexecuted_blocks=1 00:38:24.421 00:38:24.421 ' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:24.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:24.421 --rc genhtml_branch_coverage=1 00:38:24.421 --rc genhtml_function_coverage=1 00:38:24.421 --rc genhtml_legend=1 00:38:24.421 --rc geninfo_all_blocks=1 00:38:24.421 --rc geninfo_unexecuted_blocks=1 00:38:24.421 00:38:24.421 ' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:38:24.421 16:53:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:38:24.421 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80436 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80436 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80436 ']' 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:24.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:24.422 16:53:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:24.680 [2024-10-17 16:53:00.782882] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
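The cmp_versions xtrace earlier in this block splits each version string on '.', '-' and ':' and walks the components numerically; that is how 'lt 1.15 2' decides the installed lcov predates 2.x and keeps the legacy --rc option spellings. A minimal standalone sketch of that comparison in bash (illustrative only, not the exact scripts/common.sh helper; missing components are treated as 0 here):

  # True (exit 0) when version $1 sorts strictly below version $2.
  version_lt() {
      local IFS='.-:'
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      local i a b
      for (( i = 0; i < len; i++ )); do
          a=${v1[i]:-0} b=${v2[i]:-0}   # pad the shorter version with zeros
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1                          # equal versions are not less-than
  }
  version_lt 1.15 2 && echo 'lcov < 2: keep the lcov_*_coverage --rc names'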
00:38:24.680 [2024-10-17 16:53:00.783022] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80436 ] 00:38:24.680 [2024-10-17 16:53:00.944143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.938 [2024-10-17 16:53:01.059020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:38:25.872 16:53:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:26.131 { 00:38:26.131 "name": "basen1", 00:38:26.131 "aliases": [ 00:38:26.131 "4c8fe3c6-a32c-4444-a9e4-bcea3420a326" 00:38:26.131 ], 00:38:26.131 "product_name": "NVMe disk", 00:38:26.131 "block_size": 4096, 00:38:26.131 "num_blocks": 1310720, 00:38:26.131 "uuid": "4c8fe3c6-a32c-4444-a9e4-bcea3420a326", 00:38:26.131 "numa_id": -1, 00:38:26.131 "assigned_rate_limits": { 00:38:26.131 "rw_ios_per_sec": 0, 00:38:26.131 "rw_mbytes_per_sec": 0, 00:38:26.131 "r_mbytes_per_sec": 0, 00:38:26.131 "w_mbytes_per_sec": 0 00:38:26.131 }, 00:38:26.131 "claimed": true, 00:38:26.131 "claim_type": "read_many_write_one", 00:38:26.131 "zoned": false, 00:38:26.131 "supported_io_types": { 00:38:26.131 "read": true, 00:38:26.131 "write": true, 00:38:26.131 "unmap": true, 00:38:26.131 "flush": true, 00:38:26.131 "reset": true, 00:38:26.131 "nvme_admin": true, 00:38:26.131 "nvme_io": true, 00:38:26.131 "nvme_io_md": false, 00:38:26.131 "write_zeroes": true, 00:38:26.131 "zcopy": false, 00:38:26.131 "get_zone_info": false, 00:38:26.131 "zone_management": false, 00:38:26.131 "zone_append": false, 00:38:26.131 "compare": true, 00:38:26.131 "compare_and_write": false, 00:38:26.131 "abort": true, 00:38:26.131 "seek_hole": false, 00:38:26.131 "seek_data": false, 00:38:26.131 "copy": true, 00:38:26.131 "nvme_iov_md": false 00:38:26.131 }, 00:38:26.131 "driver_specific": { 00:38:26.131 "nvme": [ 00:38:26.131 { 00:38:26.131 "pci_address": "0000:00:11.0", 00:38:26.131 "trid": { 00:38:26.131 "trtype": "PCIe", 00:38:26.131 "traddr": "0000:00:11.0" 00:38:26.131 }, 00:38:26.131 "ctrlr_data": { 00:38:26.131 "cntlid": 0, 00:38:26.131 "vendor_id": "0x1b36", 00:38:26.131 "model_number": "QEMU NVMe Ctrl", 00:38:26.131 "serial_number": "12341", 00:38:26.131 "firmware_revision": "8.0.0", 00:38:26.131 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:26.131 "oacs": { 00:38:26.131 "security": 0, 00:38:26.131 "format": 1, 00:38:26.131 "firmware": 0, 00:38:26.131 "ns_manage": 1 00:38:26.131 }, 00:38:26.131 "multi_ctrlr": false, 00:38:26.131 "ana_reporting": false 00:38:26.131 }, 00:38:26.131 "vs": { 00:38:26.131 "nvme_version": "1.4" 00:38:26.131 }, 00:38:26.131 "ns_data": { 00:38:26.131 "id": 1, 00:38:26.131 "can_share": false 00:38:26.131 } 00:38:26.131 } 00:38:26.131 ], 00:38:26.131 "mp_policy": "active_passive" 00:38:26.131 } 00:38:26.131 } 00:38:26.131 ]' 00:38:26.131 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:26.389 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:26.647 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f25f96c3-79dd-45dd-a09a-1e8ae50259c9 00:38:26.647 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:38:26.648 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f25f96c3-79dd-45dd-a09a-1e8ae50259c9 00:38:26.648 16:53:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:38:26.906 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=825d1887-18ef-4daa-b4cb-58dd317f9e70 00:38:26.906 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 825d1887-18ef-4daa-b4cb-58dd317f9e70 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=8a5a93f0-e461-4e4f-b814-1dcf09a4874a 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 8a5a93f0-e461-4e4f-b814-1dcf09a4874a ]] 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 8a5a93f0-e461-4e4f-b814-1dcf09a4874a 5120 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=8a5a93f0-e461-4e4f-b814-1dcf09a4874a 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 8a5a93f0-e461-4e4f-b814-1dcf09a4874a 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=8a5a93f0-e461-4e4f-b814-1dcf09a4874a 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:38:27.164 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a5a93f0-e461-4e4f-b814-1dcf09a4874a 00:38:27.422 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:38:27.422 { 00:38:27.422 "name": "8a5a93f0-e461-4e4f-b814-1dcf09a4874a", 00:38:27.422 "aliases": [ 00:38:27.422 "lvs/basen1p0" 00:38:27.422 ], 00:38:27.422 "product_name": "Logical Volume", 00:38:27.422 "block_size": 4096, 00:38:27.422 "num_blocks": 5242880, 00:38:27.422 "uuid": "8a5a93f0-e461-4e4f-b814-1dcf09a4874a", 00:38:27.422 "assigned_rate_limits": { 00:38:27.422 "rw_ios_per_sec": 0, 00:38:27.422 "rw_mbytes_per_sec": 0, 00:38:27.422 "r_mbytes_per_sec": 0, 00:38:27.422 "w_mbytes_per_sec": 0 00:38:27.422 }, 00:38:27.422 "claimed": false, 00:38:27.422 "zoned": false, 00:38:27.422 "supported_io_types": { 00:38:27.422 "read": true, 00:38:27.422 "write": true, 00:38:27.422 "unmap": true, 00:38:27.422 "flush": false, 00:38:27.422 "reset": true, 00:38:27.422 "nvme_admin": false, 00:38:27.422 "nvme_io": false, 00:38:27.422 "nvme_io_md": false, 00:38:27.422 "write_zeroes": 
true, 00:38:27.422 "zcopy": false, 00:38:27.422 "get_zone_info": false, 00:38:27.422 "zone_management": false, 00:38:27.422 "zone_append": false, 00:38:27.422 "compare": false, 00:38:27.422 "compare_and_write": false, 00:38:27.422 "abort": false, 00:38:27.422 "seek_hole": true, 00:38:27.422 "seek_data": true, 00:38:27.422 "copy": false, 00:38:27.422 "nvme_iov_md": false 00:38:27.422 }, 00:38:27.422 "driver_specific": { 00:38:27.422 "lvol": { 00:38:27.422 "lvol_store_uuid": "825d1887-18ef-4daa-b4cb-58dd317f9e70", 00:38:27.422 "base_bdev": "basen1", 00:38:27.422 "thin_provision": true, 00:38:27.423 "num_allocated_clusters": 0, 00:38:27.423 "snapshot": false, 00:38:27.423 "clone": false, 00:38:27.423 "esnap_clone": false 00:38:27.423 } 00:38:27.423 } 00:38:27.423 } 00:38:27.423 ]' 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:38:27.423 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:38:27.680 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:38:27.680 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:38:27.680 16:53:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:38:27.949 16:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:38:27.949 16:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:38:27.949 16:53:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 8a5a93f0-e461-4e4f-b814-1dcf09a4874a -c cachen1p0 --l2p_dram_limit 2 00:38:28.220 [2024-10-17 16:53:04.270198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.270253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:38:28.220 [2024-10-17 16:53:04.270272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:28.220 [2024-10-17 16:53:04.270283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.270348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.270363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:38:28.220 [2024-10-17 16:53:04.270377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:38:28.220 [2024-10-17 16:53:04.270387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.270412] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:38:28.220 [2024-10-17 
16:53:04.271416] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:38:28.220 [2024-10-17 16:53:04.271453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.271464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:38:28.220 [2024-10-17 16:53:04.271480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.044 ms 00:38:28.220 [2024-10-17 16:53:04.271491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.271575] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 671ebd71-148e-486c-a3db-a0a82fbfcac9 00:38:28.220 [2024-10-17 16:53:04.273022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.273067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:38:28.220 [2024-10-17 16:53:04.273079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:38:28.220 [2024-10-17 16:53:04.273093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.280509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.280546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:38:28.220 [2024-10-17 16:53:04.280559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.373 ms 00:38:28.220 [2024-10-17 16:53:04.280571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.280635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.280653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:38:28.220 [2024-10-17 16:53:04.280665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:38:28.220 [2024-10-17 16:53:04.280681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.280746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.280762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:38:28.220 [2024-10-17 16:53:04.280774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:38:28.220 [2024-10-17 16:53:04.280786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.280811] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:38:28.220 [2024-10-17 16:53:04.285786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.220 [2024-10-17 16:53:04.285815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:38:28.220 [2024-10-17 16:53:04.285829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.987 ms 00:38:28.220 [2024-10-17 16:53:04.285844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.220 [2024-10-17 16:53:04.285890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.221 [2024-10-17 16:53:04.285901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:38:28.221 [2024-10-17 16:53:04.285915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:38:28.221 [2024-10-17 16:53:04.285925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:38:28.221 [2024-10-17 16:53:04.285982] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:38:28.221 [2024-10-17 16:53:04.286107] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:38:28.221 [2024-10-17 16:53:04.286128] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:38:28.221 [2024-10-17 16:53:04.286141] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:38:28.221 [2024-10-17 16:53:04.286173] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286185] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286199] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:38:28.221 [2024-10-17 16:53:04.286210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:38:28.221 [2024-10-17 16:53:04.286222] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:38:28.221 [2024-10-17 16:53:04.286232] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:38:28.221 [2024-10-17 16:53:04.286245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.221 [2024-10-17 16:53:04.286258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:38:28.221 [2024-10-17 16:53:04.286271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:38:28.221 [2024-10-17 16:53:04.286281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.221 [2024-10-17 16:53:04.286355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.221 [2024-10-17 16:53:04.286366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:38:28.221 [2024-10-17 16:53:04.286380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:38:28.221 [2024-10-17 16:53:04.286401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.221 [2024-10-17 16:53:04.286488] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:38:28.221 [2024-10-17 16:53:04.286504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:38:28.221 [2024-10-17 16:53:04.286520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:38:28.221 [2024-10-17 16:53:04.286554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:38:28.221 [2024-10-17 16:53:04.286576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:38:28.221 [2024-10-17 16:53:04.286589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:38:28.221 [2024-10-17 16:53:04.286598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:38:28.221 [2024-10-17 16:53:04.286619] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:38:28.221 [2024-10-17 16:53:04.286630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:38:28.221 [2024-10-17 16:53:04.286654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:38:28.221 [2024-10-17 16:53:04.286663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:38:28.221 [2024-10-17 16:53:04.286686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:38:28.221 [2024-10-17 16:53:04.286708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:38:28.221 [2024-10-17 16:53:04.286732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:38:28.221 [2024-10-17 16:53:04.286741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:38:28.221 [2024-10-17 16:53:04.286763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:38:28.221 [2024-10-17 16:53:04.286775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:38:28.221 [2024-10-17 16:53:04.286796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:38:28.221 [2024-10-17 16:53:04.286805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:38:28.221 [2024-10-17 16:53:04.286827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:38:28.221 [2024-10-17 16:53:04.286839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:38:28.221 [2024-10-17 16:53:04.286890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:38:28.221 [2024-10-17 16:53:04.286900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:38:28.221 [2024-10-17 16:53:04.286921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:38:28.221 [2024-10-17 16:53:04.286933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:38:28.221 [2024-10-17 16:53:04.286954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.286975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:38:28.221 [2024-10-17 16:53:04.286985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:38:28.221 [2024-10-17 16:53:04.286997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.287006] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:38:28.221 [2024-10-17 16:53:04.287019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:38:28.221 [2024-10-17 16:53:04.287030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:38:28.221 [2024-10-17 16:53:04.287042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:38:28.221 [2024-10-17 16:53:04.287053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:38:28.221 [2024-10-17 16:53:04.287068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:38:28.221 [2024-10-17 16:53:04.287078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:38:28.221 [2024-10-17 16:53:04.287090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:38:28.221 [2024-10-17 16:53:04.287099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:38:28.221 [2024-10-17 16:53:04.287111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:38:28.221 [2024-10-17 16:53:04.287125] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:38:28.221 [2024-10-17 16:53:04.287141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:38:28.221 [2024-10-17 16:53:04.287166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:38:28.221 [2024-10-17 16:53:04.287200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:38:28.221 [2024-10-17 16:53:04.287213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:38:28.221 [2024-10-17 16:53:04.287224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:38:28.221 [2024-10-17 16:53:04.287237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:38:28.221 [2024-10-17 16:53:04.287319] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:38:28.221 [2024-10-17 16:53:04.287333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:28.221 [2024-10-17 16:53:04.287361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:38:28.221 [2024-10-17 16:53:04.287372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:38:28.221 [2024-10-17 16:53:04.287384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:38:28.221 [2024-10-17 16:53:04.287395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:28.221 [2024-10-17 16:53:04.287408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:38:28.221 [2024-10-17 16:53:04.287419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.968 ms 00:38:28.221 [2024-10-17 16:53:04.287432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:28.221 [2024-10-17 16:53:04.287475] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
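Every dump_region figure above is a direct conversion from 4096-byte blocks (the block_size reported for basen1) into MiB: the superblock's blk_sz:0x20 is 32 blocks, printed as 0.12 MiB, and the base device's 0x480000-block data_btm region comes out as 18432.00 MiB, i.e. 18 GiB of the 20 GiB bdev left over after metadata. A quick sketch of the arithmetic:

  # Convert an FTL region size in 4 KiB blocks (hex or decimal) to MiB.
  blocks_to_mib() { awk -v n=$(($1)) 'BEGIN { printf "%.2f MiB\n", n * 4096 / 1048576 }'; }
  blocks_to_mib 0x20        # 0.12 MiB, the sb region above
  blocks_to_mib 0x480000    # 18432.00 MiB, the data_btm region above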
00:38:28.221 [2024-10-17 16:53:04.287492] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:38:31.509 [2024-10-17 16:53:07.598157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.598227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:38:31.509 [2024-10-17 16:53:07.598244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3316.058 ms 00:38:31.509 [2024-10-17 16:53:07.598257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.635446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.635502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:38:31.509 [2024-10-17 16:53:07.635519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.825 ms 00:38:31.509 [2024-10-17 16:53:07.635548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.635634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.635651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:38:31.509 [2024-10-17 16:53:07.635663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:38:31.509 [2024-10-17 16:53:07.635679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.680203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.680253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:38:31.509 [2024-10-17 16:53:07.680268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.546 ms 00:38:31.509 [2024-10-17 16:53:07.680282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.680333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.680348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:38:31.509 [2024-10-17 16:53:07.680360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:38:31.509 [2024-10-17 16:53:07.680376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.680879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.680905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:38:31.509 [2024-10-17 16:53:07.680916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.430 ms 00:38:31.509 [2024-10-17 16:53:07.680929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.680980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.680994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:38:31.509 [2024-10-17 16:53:07.681005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:38:31.509 [2024-10-17 16:53:07.681020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.700923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.700965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:38:31.509 [2024-10-17 16:53:07.700995] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.911 ms 00:38:31.509 [2024-10-17 16:53:07.701012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.713499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:38:31.509 [2024-10-17 16:53:07.714541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.714570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:38:31.509 [2024-10-17 16:53:07.714585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.462 ms 00:38:31.509 [2024-10-17 16:53:07.714596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.753352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.753393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:38:31.509 [2024-10-17 16:53:07.753427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.783 ms 00:38:31.509 [2024-10-17 16:53:07.753439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.753535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.753549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:38:31.509 [2024-10-17 16:53:07.753566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:38:31.509 [2024-10-17 16:53:07.753580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.509 [2024-10-17 16:53:07.788717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.509 [2024-10-17 16:53:07.788756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:38:31.509 [2024-10-17 16:53:07.788773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.137 ms 00:38:31.509 [2024-10-17 16:53:07.788784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.768 [2024-10-17 16:53:07.824722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.768 [2024-10-17 16:53:07.824758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:38:31.768 [2024-10-17 16:53:07.824775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.942 ms 00:38:31.768 [2024-10-17 16:53:07.824785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.768 [2024-10-17 16:53:07.825464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.768 [2024-10-17 16:53:07.825491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:38:31.768 [2024-10-17 16:53:07.825506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:38:31.768 [2024-10-17 16:53:07.825517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.768 [2024-10-17 16:53:07.924471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.768 [2024-10-17 16:53:07.924519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:38:31.768 [2024-10-17 16:53:07.924547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 99.056 ms 00:38:31.768 [2024-10-17 16:53:07.924559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.768 [2024-10-17 16:53:07.961825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:38:31.768 [2024-10-17 16:53:07.961867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:38:31.768 [2024-10-17 16:53:07.961897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.237 ms 00:38:31.768 [2024-10-17 16:53:07.961908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.768 [2024-10-17 16:53:07.998008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.768 [2024-10-17 16:53:07.998046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:38:31.768 [2024-10-17 16:53:07.998070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.112 ms 00:38:31.768 [2024-10-17 16:53:07.998080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.768 [2024-10-17 16:53:08.034455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.768 [2024-10-17 16:53:08.034492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:38:31.768 [2024-10-17 16:53:08.034525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.372 ms 00:38:31.768 [2024-10-17 16:53:08.034535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.769 [2024-10-17 16:53:08.034582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.769 [2024-10-17 16:53:08.034594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:38:31.769 [2024-10-17 16:53:08.034611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:38:31.769 [2024-10-17 16:53:08.034622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.769 [2024-10-17 16:53:08.034742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:38:31.769 [2024-10-17 16:53:08.034757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:38:31.769 [2024-10-17 16:53:08.034771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:38:31.769 [2024-10-17 16:53:08.034781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:38:31.769 [2024-10-17 16:53:08.035775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3771.253 ms, result 0 00:38:31.769 { 00:38:31.769 "name": "ftl", 00:38:31.769 "uuid": "671ebd71-148e-486c-a3db-a0a82fbfcac9" 00:38:31.769 } 00:38:32.027 16:53:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:38:32.028 [2024-10-17 16:53:08.258586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:32.028 16:53:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:38:32.286 16:53:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:38:32.544 [2024-10-17 16:53:08.670344] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:38:32.544 16:53:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:38:32.802 [2024-10-17 16:53:08.875838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:32.802 16:53:08 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:38:33.062 Fill FTL, iteration 1 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80558 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80558 /var/tmp/spdk.tgt.sock 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 80558 ']' 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:33.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:33.062 16:53:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:38:33.062 [2024-10-17 16:53:09.308440] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
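The knobs set just above (size=1 GiB, bs=1 MiB, count=1024, qd=2, iterations=2, empty sums array) drive the fill-and-checksum loop that the rest of this run executes: each pass pushes 1024 MiB of urandom into ftln1 at the current seek, bumps seek by count, reads the same span back at the matching skip, and stores its md5. A simplified local sketch of that loop shape (plain dd against a scratch file standing in for tcp_dd and ftln1; the qd queue-depth knob has no plain-dd equivalent):

  bs=1048576 count=1024 iterations=2
  seek=0 skip=0 sums=()
  scratch=/tmp/ftl_scratch      # hypothetical stand-in for the ftln1 bdev
  for (( i = 0; i < iterations; i++ )); do
      # Fill: write 1024 MiB of random data at the current offset.
      dd if=/dev/urandom of="$scratch" bs=$bs count=$count seek=$seek conv=notrunc status=none
      seek=$(( seek + count ))
      # Verify pass: read the same span back and record its digest.
      dd if="$scratch" of=/tmp/ftl_readback bs=$bs count=$count skip=$skip status=none
      skip=$(( skip + count ))
      sums[i]=$(md5sum /tmp/ftl_readback | cut -f1 -d' ')
  done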
00:38:33.062 [2024-10-17 16:53:09.308564] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80558 ] 00:38:33.321 [2024-10-17 16:53:09.477719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.321 [2024-10-17 16:53:09.596243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:34.258 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:34.258 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:38:34.258 16:53:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:38:34.517 ftln1 00:38:34.517 16:53:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:38:34.517 16:53:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80558 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80558 ']' 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80558 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80558 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:34.776 killing process with pid 80558 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80558' 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80558 00:38:34.776 16:53:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80558 00:38:37.325 16:53:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:38:37.325 16:53:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:38:37.325 [2024-10-17 16:53:13.403458] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
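Before the data path is exercised, the temporary initiator target (pid 80558) is torn down: killprocess first asks ps what that pid is currently running (reactor_1 here) and only then signals it, so a stale or recycled pid is never killed blindly. A condensed sketch of the same look-before-kill idea (not the exact autotest helper; assumes the process was launched by this shell, so the wait builtin can reap it):

  safe_kill() {
      local pid=$1 expected=$2 name
      name=$(ps --no-headers -o comm= "$pid") || return 0   # already exited
      if [[ $name != "$expected" ]]; then
          echo "refusing to kill $pid: it is '$name', expected '$expected'" >&2
          return 1
      fi
      kill "$pid" && wait "$pid"
  }
  safe_kill "$spdk_ini_pid" reactor_1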
00:38:37.325 [2024-10-17 16:53:13.403578] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80617 ] 00:38:37.325 [2024-10-17 16:53:13.576063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.594 [2024-10-17 16:53:13.691554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:38.970  [2024-10-17T16:53:16.205Z] Copying: 250/1024 [MB] (250 MBps) [2024-10-17T16:53:17.140Z] Copying: 500/1024 [MB] (250 MBps) [2024-10-17T16:53:18.516Z] Copying: 751/1024 [MB] (251 MBps) [2024-10-17T16:53:18.516Z] Copying: 1003/1024 [MB] (252 MBps) [2024-10-17T16:53:19.454Z] Copying: 1024/1024 [MB] (average 250 MBps) 00:38:43.155 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:38:43.155 Calculate MD5 checksum, iteration 1 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:43.155 16:53:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:38:43.155 [2024-10-17 16:53:19.419974] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
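Both the fill and the checksum passes go through the tcp_dd wrapper traced here: tcp_initiator_setup runs first (now a bare 'return 0', since ini.json already exists and the initiator is attached), then spdk_dd is launched pinned to core 1 with the initiator's JSON config so ftln1 is reachable over the TCP transport. A sketch of that wrapper shape, reusing the variables exported from ftl/common.sh at the start of this test:

  tcp_dd() {
      tcp_initiator_setup   # brings up the TCP initiator on first use only
      "$spdk_dd_bin" --cpumask='[1]' --rpc-socket="$spdk_ini_rpc" \
          --json="$spdk_ini_cnfg" "$@"
  }
  # For example, the iteration-1 read-back above:
  tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=0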
00:38:43.155 [2024-10-17 16:53:19.420109] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80681 ] 00:38:43.414 [2024-10-17 16:53:19.592326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.673 [2024-10-17 16:53:19.710978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:45.054  [2024-10-17T16:53:22.289Z] Copying: 531/1024 [MB] (531 MBps) [2024-10-17T16:53:23.226Z] Copying: 1024/1024 [MB] (average 528 MBps) 00:38:46.927 00:38:46.927 16:53:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:38:46.927 16:53:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a95aaf405e9fbf048f4c437814f0898c 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:38:48.830 Fill FTL, iteration 2 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:48.830 16:53:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:38:48.830 [2024-10-17 16:53:24.812268] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
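Iteration 1's digest is banked before the next fill starts: md5sum reads the 1 GiB dump, cut keeps the first field, and the result lands in sums[0] (a95aaf405e9fbf048f4c437814f0898c above) while the loop counter advances. The capture pattern in isolation:

  declare -a sums; i=0
  file=/home/vagrant/spdk_repo/spdk/test/ftl/file
  sums[i]=$(md5sum "$file" | cut -f1 -d' ')   # first field is the 32-hex-char digest
  (( ++i ))                                   # pre-increment keeps the exit status 0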
00:38:48.830 [2024-10-17 16:53:24.812433] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80739 ] 00:38:48.830 [2024-10-17 16:53:24.996255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.830 [2024-10-17 16:53:25.114264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:50.732  [2024-10-17T16:53:27.598Z] Copying: 248/1024 [MB] (248 MBps) [2024-10-17T16:53:28.975Z] Copying: 502/1024 [MB] (254 MBps) [2024-10-17T16:53:29.911Z] Copying: 763/1024 [MB] (261 MBps) [2024-10-17T16:53:30.847Z] Copying: 1024/1024 [MB] (average 257 MBps) 00:38:54.548 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:38:54.548 Calculate MD5 checksum, iteration 2 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:38:54.548 16:53:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:38:54.806 [2024-10-17 16:53:30.904423] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:38:54.806 [2024-10-17 16:53:30.904553] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80803 ] 00:38:54.806 [2024-10-17 16:53:31.075373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.064 [2024-10-17 16:53:31.219058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:56.967  [2024-10-17T16:53:33.831Z] Copying: 654/1024 [MB] (654 MBps) [2024-10-17T16:53:35.206Z] Copying: 1024/1024 [MB] (average 643 MBps) 00:38:58.907 00:38:58.907 16:53:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:38:58.907 16:53:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:00.810 16:53:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:39:00.810 16:53:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fa8f3390c950b9342ba821f52f1247a7 00:39:00.810 16:53:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:39:00.810 16:53:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:39:00.810 16:53:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:39:00.810 [2024-10-17 16:53:36.872815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:00.810 [2024-10-17 16:53:36.872874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:39:00.810 [2024-10-17 16:53:36.872890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:39:00.810 [2024-10-17 16:53:36.872902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:00.810 [2024-10-17 16:53:36.872931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:00.810 [2024-10-17 16:53:36.872943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:39:00.810 [2024-10-17 16:53:36.872954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:39:00.810 [2024-10-17 16:53:36.872965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:00.810 [2024-10-17 16:53:36.872990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:00.810 [2024-10-17 16:53:36.873002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:39:00.810 [2024-10-17 16:53:36.873013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:39:00.810 [2024-10-17 16:53:36.873023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:00.810 [2024-10-17 16:53:36.873085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.268 ms, result 0 00:39:00.810 true 00:39:00.810 16:53:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:39:00.810 { 00:39:00.810 "name": "ftl", 00:39:00.810 "properties": [ 00:39:00.810 { 00:39:00.810 "name": "superblock_version", 00:39:00.810 "value": 5, 00:39:00.811 "read-only": true 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "name": "base_device", 00:39:00.811 "bands": [ 00:39:00.811 { 00:39:00.811 "id": 0, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 
00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 1, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 2, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 3, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 4, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 5, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 6, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 7, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 8, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 9, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 10, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 11, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 12, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 13, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 14, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 15, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 16, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 17, 00:39:00.811 "state": "FREE", 00:39:00.811 "validity": 0.0 00:39:00.811 } 00:39:00.811 ], 00:39:00.811 "read-only": true 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "name": "cache_device", 00:39:00.811 "type": "bdev", 00:39:00.811 "chunks": [ 00:39:00.811 { 00:39:00.811 "id": 0, 00:39:00.811 "state": "INACTIVE", 00:39:00.811 "utilization": 0.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 1, 00:39:00.811 "state": "CLOSED", 00:39:00.811 "utilization": 1.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 2, 00:39:00.811 "state": "CLOSED", 00:39:00.811 "utilization": 1.0 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 3, 00:39:00.811 "state": "OPEN", 00:39:00.811 "utilization": 0.001953125 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "id": 4, 00:39:00.811 "state": "OPEN", 00:39:00.811 "utilization": 0.0 00:39:00.811 } 00:39:00.811 ], 00:39:00.811 "read-only": true 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "name": "verbose_mode", 00:39:00.811 "value": true, 00:39:00.811 "unit": "", 00:39:00.811 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:39:00.811 }, 00:39:00.811 { 00:39:00.811 "name": "prep_upgrade_on_shutdown", 00:39:00.811 "value": false, 00:39:00.811 "unit": "", 00:39:00.811 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:39:00.811 } 00:39:00.811 ] 00:39:00.811 } 00:39:00.811 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:39:01.070 [2024-10-17 16:53:37.264777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
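The used-chunk check that follows consumes exactly the property dump above: bdev_ftl_get_properties is piped through jq to count cache chunks whose utilization is non-zero. Reproduced standalone with the same bdev name and filter as this run:

# Expects 3 for the dump above: two CLOSED chunks at utilization 1.0 plus
# the OPEN chunk at 0.001953125; the INACTIVE chunk and the empty OPEN one drop out.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
    jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'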
00:39:01.070 [2024-10-17 16:53:37.264967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:39:01.070 [2024-10-17 16:53:37.265090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:39:01.070 [2024-10-17 16:53:37.265128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:01.070 [2024-10-17 16:53:37.265190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:01.070 [2024-10-17 16:53:37.265224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:39:01.070 [2024-10-17 16:53:37.265255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:39:01.070 [2024-10-17 16:53:37.265285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:01.070 [2024-10-17 16:53:37.265388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:01.070 [2024-10-17 16:53:37.265404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:39:01.070 [2024-10-17 16:53:37.265416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:39:01.070 [2024-10-17 16:53:37.265426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:01.070 [2024-10-17 16:53:37.265491] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.701 ms, result 0 00:39:01.070 true 00:39:01.070 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:39:01.070 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:39:01.070 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:39:01.328 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:39:01.328 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:39:01.328 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:39:01.586 [2024-10-17 16:53:37.692774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:01.586 [2024-10-17 16:53:37.693012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:39:01.586 [2024-10-17 16:53:37.693124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:39:01.586 [2024-10-17 16:53:37.693162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:01.586 [2024-10-17 16:53:37.693228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:01.586 [2024-10-17 16:53:37.693344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:39:01.586 [2024-10-17 16:53:37.693380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:39:01.586 [2024-10-17 16:53:37.693410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:01.586 [2024-10-17 16:53:37.693515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:01.586 [2024-10-17 16:53:37.693531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:39:01.586 [2024-10-17 16:53:37.693542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:39:01.586 [2024-10-17 16:53:37.693552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:39:01.586 [2024-10-17 16:53:37.693616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.831 ms, result 0 00:39:01.586 true 00:39:01.586 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:39:01.843 { 00:39:01.843 "name": "ftl", 00:39:01.843 "properties": [ 00:39:01.843 { 00:39:01.843 "name": "superblock_version", 00:39:01.843 "value": 5, 00:39:01.843 "read-only": true 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "name": "base_device", 00:39:01.843 "bands": [ 00:39:01.843 { 00:39:01.843 "id": 0, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 1, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 2, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 3, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 4, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 5, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 6, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 7, 00:39:01.843 "state": "FREE", 00:39:01.843 "validity": 0.0 00:39:01.843 }, 00:39:01.843 { 00:39:01.843 "id": 8, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 9, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 10, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 11, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 12, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 13, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 14, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 15, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 16, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 17, 00:39:01.844 "state": "FREE", 00:39:01.844 "validity": 0.0 00:39:01.844 } 00:39:01.844 ], 00:39:01.844 "read-only": true 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "name": "cache_device", 00:39:01.844 "type": "bdev", 00:39:01.844 "chunks": [ 00:39:01.844 { 00:39:01.844 "id": 0, 00:39:01.844 "state": "INACTIVE", 00:39:01.844 "utilization": 0.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 1, 00:39:01.844 "state": "CLOSED", 00:39:01.844 "utilization": 1.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 2, 00:39:01.844 "state": "CLOSED", 00:39:01.844 "utilization": 1.0 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 3, 00:39:01.844 "state": "OPEN", 00:39:01.844 "utilization": 0.001953125 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "id": 4, 00:39:01.844 "state": "OPEN", 00:39:01.844 "utilization": 0.0 00:39:01.844 } 00:39:01.844 ], 00:39:01.844 "read-only": true 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "name": "verbose_mode", 
00:39:01.844 "value": true, 00:39:01.844 "unit": "", 00:39:01.844 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:39:01.844 }, 00:39:01.844 { 00:39:01.844 "name": "prep_upgrade_on_shutdown", 00:39:01.844 "value": true, 00:39:01.844 "unit": "", 00:39:01.844 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:39:01.844 } 00:39:01.844 ] 00:39:01.844 } 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80436 ]] 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80436 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 80436 ']' 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 80436 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80436 00:39:01.844 killing process with pid 80436 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80436' 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 80436 00:39:01.844 16:53:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 80436 00:39:02.782 [2024-10-17 16:53:39.067556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:39:03.041 [2024-10-17 16:53:39.088172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:03.041 [2024-10-17 16:53:39.088223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:39:03.041 [2024-10-17 16:53:39.088238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:39:03.041 [2024-10-17 16:53:39.088249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:03.041 [2024-10-17 16:53:39.088272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:39:03.041 [2024-10-17 16:53:39.092440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:03.041 [2024-10-17 16:53:39.092485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:39:03.041 [2024-10-17 16:53:39.092500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.158 ms 00:39:03.041 [2024-10-17 16:53:39.092510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.205255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.205313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:39:11.158 [2024-10-17 16:53:46.205330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7124.270 ms 00:39:11.158 [2024-10-17 16:53:46.205341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.208535] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.208605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:39:11.158 [2024-10-17 16:53:46.208626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.166 ms 00:39:11.158 [2024-10-17 16:53:46.208643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.209571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.209601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:39:11.158 [2024-10-17 16:53:46.209621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:39:11.158 [2024-10-17 16:53:46.209631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.224160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.224197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:39:11.158 [2024-10-17 16:53:46.224210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.497 ms 00:39:11.158 [2024-10-17 16:53:46.224220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.232871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.232908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:39:11.158 [2024-10-17 16:53:46.232921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.628 ms 00:39:11.158 [2024-10-17 16:53:46.232947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.233045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.233059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:39:11.158 [2024-10-17 16:53:46.233076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:39:11.158 [2024-10-17 16:53:46.233086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.247015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.247051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:39:11.158 [2024-10-17 16:53:46.247063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.929 ms 00:39:11.158 [2024-10-17 16:53:46.247073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.261685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.261728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:39:11.158 [2024-10-17 16:53:46.261741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.590 ms 00:39:11.158 [2024-10-17 16:53:46.261766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.275328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.275363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:39:11.158 [2024-10-17 16:53:46.275375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.546 ms 00:39:11.158 [2024-10-17 16:53:46.275384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.289514] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.158 [2024-10-17 16:53:46.289658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:39:11.158 [2024-10-17 16:53:46.289679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.089 ms 00:39:11.158 [2024-10-17 16:53:46.289705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.158 [2024-10-17 16:53:46.289753] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:39:11.158 [2024-10-17 16:53:46.289769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:39:11.158 [2024-10-17 16:53:46.289782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:39:11.159 [2024-10-17 16:53:46.289806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:39:11.159 [2024-10-17 16:53:46.289818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:11.159 [2024-10-17 16:53:46.289980] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:39:11.159 [2024-10-17 16:53:46.289998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 671ebd71-148e-486c-a3db-a0a82fbfcac9 00:39:11.159 [2024-10-17 16:53:46.290009] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:39:11.159 [2024-10-17 16:53:46.290019] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:39:11.159 [2024-10-17 16:53:46.290029] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:39:11.159 [2024-10-17 16:53:46.290040] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:39:11.159 [2024-10-17 16:53:46.290050] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:39:11.159 [2024-10-17 16:53:46.290061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:39:11.159 [2024-10-17 16:53:46.290071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:39:11.159 [2024-10-17 16:53:46.290080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:39:11.159 [2024-10-17 16:53:46.290089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:39:11.159 [2024-10-17 16:53:46.290101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.159 [2024-10-17 16:53:46.290116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:39:11.159 [2024-10-17 16:53:46.290127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:39:11.159 [2024-10-17 16:53:46.290141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.309109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.159 [2024-10-17 16:53:46.309269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:39:11.159 [2024-10-17 16:53:46.309392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.967 ms 00:39:11.159 [2024-10-17 16:53:46.309430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.310063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:11.159 [2024-10-17 16:53:46.310163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:39:11.159 [2024-10-17 16:53:46.310233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.590 ms 00:39:11.159 [2024-10-17 16:53:46.310267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.372374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.372536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:39:11.159 [2024-10-17 16:53:46.372669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.372737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.372800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.372837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:39:11.159 [2024-10-17 16:53:46.372870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.372954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.373062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.373102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:39:11.159 [2024-10-17 16:53:46.373133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.373263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.373304] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.373342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:39:11.159 [2024-10-17 16:53:46.373418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.373452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.494984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.495176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:39:11.159 [2024-10-17 16:53:46.495263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.495300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.590643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.590882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:39:11.159 [2024-10-17 16:53:46.591058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.591096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.591228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.591339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:39:11.159 [2024-10-17 16:53:46.591418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.591447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.591519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.591553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:39:11.159 [2024-10-17 16:53:46.591590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.591620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.591847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.591894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:39:11.159 [2024-10-17 16:53:46.592100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.592137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.592208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.592244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:39:11.159 [2024-10-17 16:53:46.592326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.592367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 [2024-10-17 16:53:46.592432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.159 [2024-10-17 16:53:46.592465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:39:11.159 [2024-10-17 16:53:46.592496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.159 [2024-10-17 16:53:46.592610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.159 
[2024-10-17 16:53:46.592709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:11.160 [2024-10-17 16:53:46.592750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:39:11.160 [2024-10-17 16:53:46.592787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:11.160 [2024-10-17 16:53:46.592888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:11.160 [2024-10-17 16:53:46.593051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7517.041 ms, result 0 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81003 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81003 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81003 ']' 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:13.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:13.701 16:53:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:13.701 [2024-10-17 16:53:49.842294] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
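After the 'FTL shutdown' management process finishes, tcp_target_setup relaunches the target from the saved bdev config so the prep_upgrade_on_shutdown path can be exercised across a restart. A minimal sketch under the assumption that the helper backgrounds the binary and then polls its RPC socket (waitforlisten is the autotest helper visible in the trace):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!
# Block until the restarted target answers on /var/tmp/spdk.sock.
waitforlisten "$spdk_tgt_pid"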
00:39:13.701 [2024-10-17 16:53:49.842549] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81003 ] 00:39:13.960 [2024-10-17 16:53:50.014162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.960 [2024-10-17 16:53:50.132406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.894 [2024-10-17 16:53:51.059668] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:39:14.894 [2024-10-17 16:53:51.059747] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:39:15.153 [2024-10-17 16:53:51.206743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.206790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:39:15.153 [2024-10-17 16:53:51.206806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:39:15.153 [2024-10-17 16:53:51.206816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.206869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.206882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:39:15.153 [2024-10-17 16:53:51.206892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:39:15.153 [2024-10-17 16:53:51.206902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.206932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:39:15.153 [2024-10-17 16:53:51.207996] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:39:15.153 [2024-10-17 16:53:51.208028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.208040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:39:15.153 [2024-10-17 16:53:51.208051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.109 ms 00:39:15.153 [2024-10-17 16:53:51.208061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.209686] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:39:15.153 [2024-10-17 16:53:51.228504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.228548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:39:15.153 [2024-10-17 16:53:51.228588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.848 ms 00:39:15.153 [2024-10-17 16:53:51.228611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.228685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.228712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:39:15.153 [2024-10-17 16:53:51.228725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:39:15.153 [2024-10-17 16:53:51.228735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.235642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 
16:53:51.235672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:39:15.153 [2024-10-17 16:53:51.235689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.835 ms 00:39:15.153 [2024-10-17 16:53:51.235709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.235790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.235804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:39:15.153 [2024-10-17 16:53:51.235816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:39:15.153 [2024-10-17 16:53:51.235826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.235875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.235887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:39:15.153 [2024-10-17 16:53:51.235898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:39:15.153 [2024-10-17 16:53:51.235911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.235939] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:39:15.153 [2024-10-17 16:53:51.240690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.240733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:39:15.153 [2024-10-17 16:53:51.240746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.765 ms 00:39:15.153 [2024-10-17 16:53:51.240772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.240818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.240829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:39:15.153 [2024-10-17 16:53:51.240839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:15.153 [2024-10-17 16:53:51.240849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.240906] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:39:15.153 [2024-10-17 16:53:51.240930] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:39:15.153 [2024-10-17 16:53:51.240969] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:39:15.153 [2024-10-17 16:53:51.240988] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:39:15.153 [2024-10-17 16:53:51.241078] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:39:15.153 [2024-10-17 16:53:51.241092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:39:15.153 [2024-10-17 16:53:51.241105] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:39:15.153 [2024-10-17 16:53:51.241127] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241139] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241149] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:39:15.153 [2024-10-17 16:53:51.241163] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:39:15.153 [2024-10-17 16:53:51.241172] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:39:15.153 [2024-10-17 16:53:51.241182] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:39:15.153 [2024-10-17 16:53:51.241192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.241202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:39:15.153 [2024-10-17 16:53:51.241212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:39:15.153 [2024-10-17 16:53:51.241222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.241296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.153 [2024-10-17 16:53:51.241307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:39:15.153 [2024-10-17 16:53:51.241316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:39:15.153 [2024-10-17 16:53:51.241326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.153 [2024-10-17 16:53:51.241430] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:39:15.153 [2024-10-17 16:53:51.241443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:39:15.153 [2024-10-17 16:53:51.241454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.153 [2024-10-17 16:53:51.241475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:39:15.153 [2024-10-17 16:53:51.241486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:39:15.153 [2024-10-17 16:53:51.241496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:39:15.153 [2024-10-17 16:53:51.241506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:39:15.153 [2024-10-17 16:53:51.241515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:39:15.153 [2024-10-17 16:53:51.241524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.153 [2024-10-17 16:53:51.241533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:39:15.153 [2024-10-17 16:53:51.241544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:39:15.153 [2024-10-17 16:53:51.241553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.153 [2024-10-17 16:53:51.241562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:39:15.153 [2024-10-17 16:53:51.241571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:39:15.153 [2024-10-17 16:53:51.241580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.153 [2024-10-17 16:53:51.241588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:39:15.153 [2024-10-17 16:53:51.241598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:39:15.153 [2024-10-17 16:53:51.241607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.153 [2024-10-17 16:53:51.241616] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:39:15.153 [2024-10-17 16:53:51.241625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:39:15.153 [2024-10-17 16:53:51.241634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:39:15.153 [2024-10-17 16:53:51.241652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:39:15.153 [2024-10-17 16:53:51.241661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:39:15.153 [2024-10-17 16:53:51.241689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:39:15.153 [2024-10-17 16:53:51.241710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:39:15.153 [2024-10-17 16:53:51.241729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:39:15.153 [2024-10-17 16:53:51.241738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:39:15.153 [2024-10-17 16:53:51.241756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:39:15.153 [2024-10-17 16:53:51.241765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.153 [2024-10-17 16:53:51.241775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:39:15.153 [2024-10-17 16:53:51.241784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:39:15.153 [2024-10-17 16:53:51.241793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.154 [2024-10-17 16:53:51.241802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:39:15.154 [2024-10-17 16:53:51.241811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:39:15.154 [2024-10-17 16:53:51.241820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.154 [2024-10-17 16:53:51.241829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:39:15.154 [2024-10-17 16:53:51.241838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:39:15.154 [2024-10-17 16:53:51.241846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.154 [2024-10-17 16:53:51.241856] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:39:15.154 [2024-10-17 16:53:51.241866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:39:15.154 [2024-10-17 16:53:51.241875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:39:15.154 [2024-10-17 16:53:51.241885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:15.154 [2024-10-17 16:53:51.241894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:39:15.154 [2024-10-17 16:53:51.241904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:39:15.154 [2024-10-17 16:53:51.241913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:39:15.154 [2024-10-17 16:53:51.241922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:39:15.154 [2024-10-17 16:53:51.241931] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:39:15.154 [2024-10-17 16:53:51.241940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:39:15.154 [2024-10-17 16:53:51.241951] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:39:15.154 [2024-10-17 16:53:51.241973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.241985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:39:15.154 [2024-10-17 16:53:51.241995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:39:15.154 [2024-10-17 16:53:51.242025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:39:15.154 [2024-10-17 16:53:51.242035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:39:15.154 [2024-10-17 16:53:51.242045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:39:15.154 [2024-10-17 16:53:51.242055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:39:15.154 [2024-10-17 16:53:51.242129] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:39:15.154 [2024-10-17 16:53:51.242140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:15.154 [2024-10-17 16:53:51.242160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:39:15.154 [2024-10-17 16:53:51.242170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:39:15.154 [2024-10-17 16:53:51.242180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:39:15.154 [2024-10-17 16:53:51.242191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:15.154 [2024-10-17 16:53:51.242201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:39:15.154 [2024-10-17 16:53:51.242211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.816 ms 00:39:15.154 [2024-10-17 16:53:51.242221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:15.154 [2024-10-17 16:53:51.242267] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:39:15.154 [2024-10-17 16:53:51.242280] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:39:18.436 [2024-10-17 16:53:54.441377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.441445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:39:18.436 [2024-10-17 16:53:54.441462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3204.304 ms 00:39:18.436 [2024-10-17 16:53:54.441474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.479581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.479850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:39:18.436 [2024-10-17 16:53:54.479877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.842 ms 00:39:18.436 [2024-10-17 16:53:54.479890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.479994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.480008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:39:18.436 [2024-10-17 16:53:54.480026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:39:18.436 [2024-10-17 16:53:54.480036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.525407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.525451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:39:18.436 [2024-10-17 16:53:54.525465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.399 ms 00:39:18.436 [2024-10-17 16:53:54.525476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.525529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.525541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:39:18.436 [2024-10-17 16:53:54.525552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:18.436 [2024-10-17 16:53:54.525562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.526107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.526122] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:39:18.436 [2024-10-17 16:53:54.526134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:39:18.436 [2024-10-17 16:53:54.526144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.526189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.526204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:39:18.436 [2024-10-17 16:53:54.526214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:39:18.436 [2024-10-17 16:53:54.526224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.547057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.547097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:39:18.436 [2024-10-17 16:53:54.547112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.844 ms 00:39:18.436 [2024-10-17 16:53:54.547122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.565907] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:39:18.436 [2024-10-17 16:53:54.565955] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:39:18.436 [2024-10-17 16:53:54.565970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.565997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:39:18.436 [2024-10-17 16:53:54.566008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.756 ms 00:39:18.436 [2024-10-17 16:53:54.566018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.585765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.585820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:39:18.436 [2024-10-17 16:53:54.585833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.734 ms 00:39:18.436 [2024-10-17 16:53:54.585844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.436 [2024-10-17 16:53:54.603166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.436 [2024-10-17 16:53:54.603202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:39:18.437 [2024-10-17 16:53:54.603214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.301 ms 00:39:18.437 [2024-10-17 16:53:54.603240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.621133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.621169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:39:18.437 [2024-10-17 16:53:54.621182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.879 ms 00:39:18.437 [2024-10-17 16:53:54.621192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.622029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.622061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:39:18.437 [2024-10-17 
16:53:54.622077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.729 ms 00:39:18.437 [2024-10-17 16:53:54.622088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.718073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.718126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:39:18.437 [2024-10-17 16:53:54.718151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 96.115 ms 00:39:18.437 [2024-10-17 16:53:54.718162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.730544] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:39:18.437 [2024-10-17 16:53:54.731729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.731759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:39:18.437 [2024-10-17 16:53:54.731773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.530 ms 00:39:18.437 [2024-10-17 16:53:54.731784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.731878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.731892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:39:18.437 [2024-10-17 16:53:54.731907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:39:18.437 [2024-10-17 16:53:54.731917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.731980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.731992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:39:18.437 [2024-10-17 16:53:54.732003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:39:18.437 [2024-10-17 16:53:54.732013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.732035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.732047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:39:18.437 [2024-10-17 16:53:54.732057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:39:18.437 [2024-10-17 16:53:54.732068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.437 [2024-10-17 16:53:54.732116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:39:18.437 [2024-10-17 16:53:54.732129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.437 [2024-10-17 16:53:54.732139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:39:18.437 [2024-10-17 16:53:54.732150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:39:18.437 [2024-10-17 16:53:54.732160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.695 [2024-10-17 16:53:54.768233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.695 [2024-10-17 16:53:54.768276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:39:18.695 [2024-10-17 16:53:54.768296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.108 ms 00:39:18.695 [2024-10-17 16:53:54.768306] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.695 [2024-10-17 16:53:54.768388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.695 [2024-10-17 16:53:54.768401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:39:18.695 [2024-10-17 16:53:54.768413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:39:18.695 [2024-10-17 16:53:54.768423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.695 [2024-10-17 16:53:54.769620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3568.224 ms, result 0 00:39:18.695 [2024-10-17 16:53:54.784624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:18.695 [2024-10-17 16:53:54.800562] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:39:18.695 [2024-10-17 16:53:54.810090] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:18.695 16:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:18.695 16:53:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:39:18.695 16:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:39:18.695 16:53:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:39:18.695 16:53:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:39:18.954 [2024-10-17 16:53:55.029815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.954 [2024-10-17 16:53:55.030041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:39:18.954 [2024-10-17 16:53:55.030067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:39:18.954 [2024-10-17 16:53:55.030078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.954 [2024-10-17 16:53:55.030124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.954 [2024-10-17 16:53:55.030135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:39:18.954 [2024-10-17 16:53:55.030146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:18.954 [2024-10-17 16:53:55.030156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.954 [2024-10-17 16:53:55.030178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:18.954 [2024-10-17 16:53:55.030188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:39:18.954 [2024-10-17 16:53:55.030199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:39:18.954 [2024-10-17 16:53:55.030209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:18.954 [2024-10-17 16:53:55.030275] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.443 ms, result 0 00:39:18.954 true 00:39:18.954 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:39:18.954 { 00:39:18.954 "name": "ftl", 00:39:18.954 "properties": [ 00:39:18.954 { 00:39:18.954 "name": "superblock_version", 00:39:18.954 "value": 5, 00:39:18.954 "read-only": true 00:39:18.954 }, 
00:39:18.954 { 00:39:18.954 "name": "base_device", 00:39:18.954 "bands": [ 00:39:18.954 { 00:39:18.954 "id": 0, 00:39:18.954 "state": "CLOSED", 00:39:18.954 "validity": 1.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 1, 00:39:18.954 "state": "CLOSED", 00:39:18.954 "validity": 1.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 2, 00:39:18.954 "state": "CLOSED", 00:39:18.954 "validity": 0.007843137254901933 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 3, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 4, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 5, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 6, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 7, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 8, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 9, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 10, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 11, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 12, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 13, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 14, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 15, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 16, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 17, 00:39:18.954 "state": "FREE", 00:39:18.954 "validity": 0.0 00:39:18.954 } 00:39:18.954 ], 00:39:18.954 "read-only": true 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "name": "cache_device", 00:39:18.954 "type": "bdev", 00:39:18.954 "chunks": [ 00:39:18.954 { 00:39:18.954 "id": 0, 00:39:18.954 "state": "INACTIVE", 00:39:18.954 "utilization": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 1, 00:39:18.954 "state": "OPEN", 00:39:18.954 "utilization": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 2, 00:39:18.954 "state": "OPEN", 00:39:18.954 "utilization": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 3, 00:39:18.954 "state": "FREE", 00:39:18.954 "utilization": 0.0 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "id": 4, 00:39:18.954 "state": "FREE", 00:39:18.954 "utilization": 0.0 00:39:18.954 } 00:39:18.954 ], 00:39:18.954 "read-only": true 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "name": "verbose_mode", 00:39:18.954 "value": true, 00:39:18.954 "unit": "", 00:39:18.954 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:39:18.954 }, 00:39:18.954 { 00:39:18.954 "name": "prep_upgrade_on_shutdown", 00:39:18.954 "value": false, 00:39:18.954 "unit": "", 00:39:18.954 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:39:18.954 } 00:39:18.954 ] 00:39:18.954 } 00:39:18.954 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:39:18.954 16:53:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:39:18.954 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:39:19.212 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:39:19.212 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:39:19.212 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:39:19.212 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:39:19.212 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:39:19.471 Validate MD5 checksum, iteration 1 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:39:19.471 16:53:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:39:19.471 [2024-10-17 16:53:55.750611] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
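
The two jq filters traced above are the test's pre-shutdown guards: it pulls the FTL bdev's properties over RPC, counts NV-cache chunks with non-zero utilization and bands still in the OPENED state, and the `[[ 0 -ne 0 ]]` checks that follow fall through because both counts are zero in this run. A minimal standalone sketch of the same checks, assuming the target's RPC socket is up and the bdev is named ftl as in this run:

  # Chunks in the NV cache that still hold user data (utilization != 0.0).
  # In this run the count is 0, so the '[[ 0 -ne 0 ]]' guard falls through.
  used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
          | jq '[.properties[] | select(.name == "cache_device")
                 | .chunks[] | select(.utilization != 0.0)] | length')

  # Bands reported as OPENED. Note the filter selects a property literally named
  # "bands", while the JSON dumped above nests the band list under "base_device",
  # so against this layout the expression matches nothing and yields 0 regardless.
  opened=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
          | jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length')
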
00:39:19.471 [2024-10-17 16:53:55.750969] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81076 ] 00:39:19.729 [2024-10-17 16:53:55.923605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:19.988 [2024-10-17 16:53:56.041906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.891  [2024-10-17T16:53:58.190Z] Copying: 713/1024 [MB] (713 MBps) [2024-10-17T16:54:00.090Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:39:23.791 00:39:23.791 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:39:23.791 16:53:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:39:25.166 Validate MD5 checksum, iteration 2 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a95aaf405e9fbf048f4c437814f0898c 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a95aaf405e9fbf048f4c437814f0898c != \a\9\5\a\a\f\4\0\5\e\9\f\b\f\0\4\8\f\4\c\4\3\7\8\1\4\f\0\8\9\8\c ]] 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:39:25.166 16:54:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:39:25.166 [2024-10-17 16:54:01.386294] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
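
Iteration 1 above streams the first 1024 MiB window out of ftln1 over NVMe/TCP at queue depth 2, hashes the local copy, and compares it against the recorded sum; iteration 2 then repeats the read at skip=1024. A condensed sketch of that loop, reconstructed from the xtrace (testfile and sums are illustrative names; tcp_dd, iterations, and the flag values are taken directly from the trace):

  skip=0
  for (( i = 0; i < iterations; i++ )); do
      echo "Validate MD5 checksum, iteration $(( i + 1 ))"
      # tcp_dd (ftl/common.sh) wraps spdk_dd with the NVMe/TCP initiator config.
      tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
      skip=$(( skip + 1024 ))
      sum=$(md5sum "$testfile" | cut -f1 -d' ')
      # A quoted right-hand side of != inside [[ ]] is matched literally, and
      # bash's xtrace escapes each of its characters -- which is why the expected
      # value prints in the trace as \a\9\5\a\a\f... rather than as plain text.
      [[ $sum != "${sums[$i]}" ]] && exit 1
  done

The two sums recorded here are what the post-restart validation further down has to reproduce; the trace below shows iteration 1 returning the identical value after recovery.
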
00:39:25.166 [2024-10-17 16:54:01.386792] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81143 ] 00:39:25.425 [2024-10-17 16:54:01.554990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.425 [2024-10-17 16:54:01.668433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.329  [2024-10-17T16:54:03.887Z] Copying: 729/1024 [MB] (729 MBps) [2024-10-17T16:54:07.175Z] Copying: 1024/1024 [MB] (average 730 MBps) 00:39:30.876 00:39:30.876 16:54:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:39:30.876 16:54:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fa8f3390c950b9342ba821f52f1247a7 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fa8f3390c950b9342ba821f52f1247a7 != \f\a\8\f\3\3\9\0\c\9\5\0\b\9\3\4\2\b\a\8\2\1\f\5\2\f\1\2\4\7\a\7 ]] 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81003 ]] 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81003 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:39:32.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81222 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81222 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81222 ']' 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
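
With both sums banked, the trace reaches the scenario under test: tcp_target_shutdown_dirty SIGKILLs the target (PID 81003 here) so FTL never gets to persist clean-shutdown state, and tcp_target_setup immediately relaunches spdk_tgt from the tgt.json captured earlier, blocking on the RPC socket before the test continues. A sketch of that sequence using the paths from this run (waitforlisten is the autotest_common.sh helper visible in the trace):

  # Dirty shutdown: SIGKILL, no RPC-driven unload, so the FTL superblock on the
  # cache device keeps its dirty marker.
  kill -9 "$spdk_tgt_pid"
  unset spdk_tgt_pid

  # Relaunch from the saved configuration; the dirty superblock is what forces
  # the recovery-flavoured startup below (SHM: clean 0, band state recovery,
  # open-chunk recovery) instead of a clean fast start.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # polls until /var/tmp/spdk.sock accepts RPCs

The "line 830: 81003 Killed" notice just below is bash reporting the killed background job from autotest_common.sh, not a failure of the freshly started target.
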
00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:32.251 16:54:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:32.251 [2024-10-17 16:54:08.384205] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:39:32.251 [2024-10-17 16:54:08.384562] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81222 ] 00:39:32.251 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 81003 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:39:32.510 [2024-10-17 16:54:08.556847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:32.510 [2024-10-17 16:54:08.662646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.592 [2024-10-17 16:54:09.605494] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:39:33.592 [2024-10-17 16:54:09.605761] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:39:33.592 [2024-10-17 16:54:09.751576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.751740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:39:33.592 [2024-10-17 16:54:09.751865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:39:33.592 [2024-10-17 16:54:09.751904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.751992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.752028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:39:33.592 [2024-10-17 16:54:09.752059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:39:33.592 [2024-10-17 16:54:09.752150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.752217] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:39:33.592 [2024-10-17 16:54:09.753167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:39:33.592 [2024-10-17 16:54:09.753321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.753398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:39:33.592 [2024-10-17 16:54:09.753433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.120 ms 00:39:33.592 [2024-10-17 16:54:09.753462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.754026] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:39:33.592 [2024-10-17 16:54:09.777825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.778004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:39:33.592 [2024-10-17 16:54:09.778133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.838 ms 00:39:33.592 [2024-10-17 16:54:09.778171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.791672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:39:33.592 [2024-10-17 16:54:09.791835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:39:33.592 [2024-10-17 16:54:09.791972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:39:33.592 [2024-10-17 16:54:09.792017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.792526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.792651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:39:33.592 [2024-10-17 16:54:09.792745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:39:33.592 [2024-10-17 16:54:09.792781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.792865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.792933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:39:33.592 [2024-10-17 16:54:09.793006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:39:33.592 [2024-10-17 16:54:09.793036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.793086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.793119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:39:33.592 [2024-10-17 16:54:09.793148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:39:33.592 [2024-10-17 16:54:09.793176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.793223] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:39:33.592 [2024-10-17 16:54:09.797374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.797495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:39:33.592 [2024-10-17 16:54:09.797565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.163 ms 00:39:33.592 [2024-10-17 16:54:09.797599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.797653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.797727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:39:33.592 [2024-10-17 16:54:09.797759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:39:33.592 [2024-10-17 16:54:09.797840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.797910] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:39:33.592 [2024-10-17 16:54:09.798006] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:39:33.592 [2024-10-17 16:54:09.798066] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:39:33.592 [2024-10-17 16:54:09.798083] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:39:33.592 [2024-10-17 16:54:09.798174] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:39:33.592 [2024-10-17 16:54:09.798187] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:39:33.592 [2024-10-17 16:54:09.798200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:39:33.592 [2024-10-17 16:54:09.798212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:39:33.592 [2024-10-17 16:54:09.798225] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:39:33.592 [2024-10-17 16:54:09.798236] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:39:33.592 [2024-10-17 16:54:09.798246] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:39:33.592 [2024-10-17 16:54:09.798256] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:39:33.592 [2024-10-17 16:54:09.798265] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:39:33.592 [2024-10-17 16:54:09.798276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.798286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:39:33.592 [2024-10-17 16:54:09.798300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.370 ms 00:39:33.592 [2024-10-17 16:54:09.798309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.798390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.592 [2024-10-17 16:54:09.798401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:39:33.592 [2024-10-17 16:54:09.798411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:39:33.592 [2024-10-17 16:54:09.798420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.592 [2024-10-17 16:54:09.798507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:39:33.592 [2024-10-17 16:54:09.798519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:39:33.592 [2024-10-17 16:54:09.798530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:39:33.592 [2024-10-17 16:54:09.798544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.592 [2024-10-17 16:54:09.798554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:39:33.592 [2024-10-17 16:54:09.798563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:39:33.592 [2024-10-17 16:54:09.798573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:39:33.592 [2024-10-17 16:54:09.798582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:39:33.592 [2024-10-17 16:54:09.798591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:39:33.592 [2024-10-17 16:54:09.798600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.592 [2024-10-17 16:54:09.798609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:39:33.592 [2024-10-17 16:54:09.798618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:39:33.592 [2024-10-17 16:54:09.798627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.592 [2024-10-17 16:54:09.798637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:39:33.592 [2024-10-17 16:54:09.798646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:39:33.592 [2024-10-17 16:54:09.798657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.592 [2024-10-17 16:54:09.798666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:39:33.592 [2024-10-17 16:54:09.798676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:39:33.592 [2024-10-17 16:54:09.798685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.592 [2024-10-17 16:54:09.798695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:39:33.592 [2024-10-17 16:54:09.798717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:39:33.592 [2024-10-17 16:54:09.798727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:33.592 [2024-10-17 16:54:09.798736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:39:33.592 [2024-10-17 16:54:09.798755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:39:33.593 [2024-10-17 16:54:09.798765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:33.593 [2024-10-17 16:54:09.798774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:39:33.593 [2024-10-17 16:54:09.798783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:39:33.593 [2024-10-17 16:54:09.798792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:33.593 [2024-10-17 16:54:09.798801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:39:33.593 [2024-10-17 16:54:09.798811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:39:33.593 [2024-10-17 16:54:09.798820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:39:33.593 [2024-10-17 16:54:09.798829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:39:33.593 [2024-10-17 16:54:09.798838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:39:33.593 [2024-10-17 16:54:09.798846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.593 [2024-10-17 16:54:09.798855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:39:33.593 [2024-10-17 16:54:09.798864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:39:33.593 [2024-10-17 16:54:09.798873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.593 [2024-10-17 16:54:09.798882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:39:33.593 [2024-10-17 16:54:09.798891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:39:33.593 [2024-10-17 16:54:09.798900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.593 [2024-10-17 16:54:09.798909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:39:33.593 [2024-10-17 16:54:09.798918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:39:33.593 [2024-10-17 16:54:09.798927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:39:33.593 [2024-10-17 16:54:09.798936] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:39:33.593 [2024-10-17 16:54:09.798945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:39:33.593 [2024-10-17 16:54:09.798956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:39:33.593 [2024-10-17 16:54:09.798966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:39:33.593 [2024-10-17 16:54:09.798976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:39:33.593 [2024-10-17 16:54:09.798985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:39:33.593 [2024-10-17 16:54:09.798994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:39:33.593 [2024-10-17 16:54:09.799004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:39:33.593 [2024-10-17 16:54:09.799012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:39:33.593 [2024-10-17 16:54:09.799022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:39:33.593 [2024-10-17 16:54:09.799033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:39:33.593 [2024-10-17 16:54:09.799046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:39:33.593 [2024-10-17 16:54:09.799068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:39:33.593 [2024-10-17 16:54:09.799099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:39:33.593 [2024-10-17 16:54:09.799109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:39:33.593 [2024-10-17 16:54:09.799119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:39:33.593 [2024-10-17 16:54:09.799129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:39:33.593 [2024-10-17 16:54:09.799200] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:39:33.593 [2024-10-17 16:54:09.799211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799222] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:33.593 [2024-10-17 16:54:09.799231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:39:33.593 [2024-10-17 16:54:09.799242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:39:33.593 [2024-10-17 16:54:09.799252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:39:33.593 [2024-10-17 16:54:09.799262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.593 [2024-10-17 16:54:09.799273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:39:33.593 [2024-10-17 16:54:09.799287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.811 ms 00:39:33.593 [2024-10-17 16:54:09.799297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.593 [2024-10-17 16:54:09.835119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.593 [2024-10-17 16:54:09.835157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:39:33.593 [2024-10-17 16:54:09.835169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.821 ms 00:39:33.593 [2024-10-17 16:54:09.835179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.593 [2024-10-17 16:54:09.835216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.593 [2024-10-17 16:54:09.835226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:39:33.593 [2024-10-17 16:54:09.835235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:39:33.593 [2024-10-17 16:54:09.835245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.593 [2024-10-17 16:54:09.880833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.593 [2024-10-17 16:54:09.880868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:39:33.593 [2024-10-17 16:54:09.880880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.605 ms 00:39:33.593 [2024-10-17 16:54:09.880891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.593 [2024-10-17 16:54:09.880920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.593 [2024-10-17 16:54:09.880930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:39:33.593 [2024-10-17 16:54:09.880940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:33.593 [2024-10-17 16:54:09.880950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.593 [2024-10-17 16:54:09.881077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.593 [2024-10-17 16:54:09.881090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:39:33.593 [2024-10-17 16:54:09.881102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:39:33.593 [2024-10-17 16:54:09.881111] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:39:33.593 [2024-10-17 16:54:09.881150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.593 [2024-10-17 16:54:09.881160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:39:33.593 [2024-10-17 16:54:09.881171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:39:33.593 [2024-10-17 16:54:09.881180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.852 [2024-10-17 16:54:09.901834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.852 [2024-10-17 16:54:09.901977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:39:33.852 [2024-10-17 16:54:09.902064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.665 ms 00:39:33.852 [2024-10-17 16:54:09.902099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.852 [2024-10-17 16:54:09.902257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.852 [2024-10-17 16:54:09.902312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:39:33.852 [2024-10-17 16:54:09.902400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:39:33.852 [2024-10-17 16:54:09.902435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.852 [2024-10-17 16:54:09.937779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.852 [2024-10-17 16:54:09.937909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:39:33.852 [2024-10-17 16:54:09.938003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.356 ms 00:39:33.852 [2024-10-17 16:54:09.938039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.852 [2024-10-17 16:54:09.952058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.852 [2024-10-17 16:54:09.952175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:39:33.852 [2024-10-17 16:54:09.952258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.635 ms 00:39:33.852 [2024-10-17 16:54:09.952302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.852 [2024-10-17 16:54:10.033644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.852 [2024-10-17 16:54:10.033883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:39:33.852 [2024-10-17 16:54:10.033908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 81.387 ms 00:39:33.852 [2024-10-17 16:54:10.033927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.852 [2024-10-17 16:54:10.034139] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:39:33.852 [2024-10-17 16:54:10.034276] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:39:33.853 [2024-10-17 16:54:10.034393] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:39:33.853 [2024-10-17 16:54:10.034511] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:39:33.853 [2024-10-17 16:54:10.034523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.853 [2024-10-17 16:54:10.034534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:39:33.853 [2024-10-17 
16:54:10.034545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.511 ms 00:39:33.853 [2024-10-17 16:54:10.034556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.853 [2024-10-17 16:54:10.034644] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:39:33.853 [2024-10-17 16:54:10.034661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.853 [2024-10-17 16:54:10.034671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:39:33.853 [2024-10-17 16:54:10.034682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:39:33.853 [2024-10-17 16:54:10.034696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.853 [2024-10-17 16:54:10.056321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.853 [2024-10-17 16:54:10.056451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:39:33.853 [2024-10-17 16:54:10.056477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.599 ms 00:39:33.853 [2024-10-17 16:54:10.056503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.853 [2024-10-17 16:54:10.069685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.853 [2024-10-17 16:54:10.069848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:39:33.853 [2024-10-17 16:54:10.069972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:39:33.853 [2024-10-17 16:54:10.069989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:33.853 [2024-10-17 16:54:10.070092] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:39:33.853 [2024-10-17 16:54:10.070282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:33.853 [2024-10-17 16:54:10.070293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:39:33.853 [2024-10-17 16:54:10.070307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:39:33.853 [2024-10-17 16:54:10.070317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.421 [2024-10-17 16:54:10.625499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.421 [2024-10-17 16:54:10.625559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:39:34.421 [2024-10-17 16:54:10.625577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 554.929 ms 00:39:34.421 [2024-10-17 16:54:10.625589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.421 [2024-10-17 16:54:10.630964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.421 [2024-10-17 16:54:10.631006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:39:34.421 [2024-10-17 16:54:10.631019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.866 ms 00:39:34.421 [2024-10-17 16:54:10.631030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.421 [2024-10-17 16:54:10.631409] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:39:34.421 [2024-10-17 16:54:10.631440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.421 [2024-10-17 16:54:10.631450] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:39:34.421 [2024-10-17 16:54:10.631462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:39:34.421 [2024-10-17 16:54:10.631473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.421 [2024-10-17 16:54:10.631503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.421 [2024-10-17 16:54:10.631515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:39:34.421 [2024-10-17 16:54:10.631526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:34.421 [2024-10-17 16:54:10.631536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.421 [2024-10-17 16:54:10.631574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 562.398 ms, result 0 00:39:34.421 [2024-10-17 16:54:10.631619] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:39:34.421 [2024-10-17 16:54:10.631714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.421 [2024-10-17 16:54:10.631726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:39:34.421 [2024-10-17 16:54:10.631736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.096 ms 00:39:34.421 [2024-10-17 16:54:10.631745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.172990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.173237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:39:34.991 [2024-10-17 16:54:11.173327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 540.826 ms 00:39:34.991 [2024-10-17 16:54:11.173364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.179105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.179245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:39:34.991 [2024-10-17 16:54:11.179325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.122 ms 00:39:34.991 [2024-10-17 16:54:11.179361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.179980] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:39:34.991 [2024-10-17 16:54:11.180128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.180169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:39:34.991 [2024-10-17 16:54:11.180242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.711 ms 00:39:34.991 [2024-10-17 16:54:11.180276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.180335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.180510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:39:34.991 [2024-10-17 16:54:11.180555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:34.991 [2024-10-17 16:54:11.180587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 
16:54:11.180672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 549.938 ms, result 0 00:39:34.991 [2024-10-17 16:54:11.180785] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:34.991 [2024-10-17 16:54:11.181012] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:39:34.991 [2024-10-17 16:54:11.181067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.181097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:39:34.991 [2024-10-17 16:54:11.181129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1112.804 ms 00:39:34.991 [2024-10-17 16:54:11.181235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.181351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.181384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:39:34.991 [2024-10-17 16:54:11.181415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:39:34.991 [2024-10-17 16:54:11.181451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.192698] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:39:34.991 [2024-10-17 16:54:11.192955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.193001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:39:34.991 [2024-10-17 16:54:11.193086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.485 ms 00:39:34.991 [2024-10-17 16:54:11.193121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.193733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.193754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:39:34.991 [2024-10-17 16:54:11.193765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:39:34.991 [2024-10-17 16:54:11.193779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.195777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.195799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:39:34.991 [2024-10-17 16:54:11.195811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.982 ms 00:39:34.991 [2024-10-17 16:54:11.195820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.195874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.195896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:39:34.991 [2024-10-17 16:54:11.195906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:39:34.991 [2024-10-17 16:54:11.195916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.991 [2024-10-17 16:54:11.196015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.991 [2024-10-17 16:54:11.196026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:39:34.991 
[2024-10-17 16:54:11.196036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:39:34.991 [2024-10-17 16:54:11.196045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.992 [2024-10-17 16:54:11.196066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.992 [2024-10-17 16:54:11.196076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:39:34.992 [2024-10-17 16:54:11.196085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:39:34.992 [2024-10-17 16:54:11.196095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.992 [2024-10-17 16:54:11.196124] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:39:34.992 [2024-10-17 16:54:11.196136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.992 [2024-10-17 16:54:11.196149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:39:34.992 [2024-10-17 16:54:11.196158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:39:34.992 [2024-10-17 16:54:11.196167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.992 [2024-10-17 16:54:11.196215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:34.992 [2024-10-17 16:54:11.196225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:39:34.992 [2024-10-17 16:54:11.196235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:39:34.992 [2024-10-17 16:54:11.196244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:34.992 [2024-10-17 16:54:11.197306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1447.495 ms, result 0 00:39:34.992 [2024-10-17 16:54:11.209762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:34.992 [2024-10-17 16:54:11.225725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:39:34.992 [2024-10-17 16:54:11.235147] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:39:34.992 Validate MD5 checksum, iteration 1 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:39:34.992 16:54:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:39:34.992 16:54:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:39:35.251 [2024-10-17 16:54:11.371695] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 00:39:35.251 [2024-10-17 16:54:11.372005] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81257 ] 00:39:35.251 [2024-10-17 16:54:11.536065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:35.510 [2024-10-17 16:54:11.649222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:37.414  [2024-10-17T16:54:13.971Z] Copying: 721/1024 [MB] (721 MBps) [2024-10-17T16:54:15.349Z] Copying: 1024/1024 [MB] (average 716 MBps) 00:39:39.050 00:39:39.050 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:39:39.050 16:54:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:40.949 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:39:40.950 Validate MD5 checksum, iteration 2 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a95aaf405e9fbf048f4c437814f0898c 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a95aaf405e9fbf048f4c437814f0898c != \a\9\5\a\a\f\4\0\5\e\9\f\b\f\0\4\8\f\4\c\4\3\7\8\1\4\f\0\8\9\8\c ]] 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:39:40.950 16:54:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:39:40.950 [2024-10-17 16:54:16.891920] Starting SPDK v25.01-pre git sha1 
c1dd46fc6 / DPDK 24.03.0 initialization... 00:39:40.950 [2024-10-17 16:54:16.892801] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81318 ] 00:39:40.950 [2024-10-17 16:54:17.062216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.950 [2024-10-17 16:54:17.176229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:42.852  [2024-10-17T16:54:19.410Z] Copying: 723/1024 [MB] (723 MBps) [2024-10-17T16:54:20.786Z] Copying: 1024/1024 [MB] (average 719 MBps) 00:39:44.487 00:39:44.487 16:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:39:44.487 16:54:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fa8f3390c950b9342ba821f52f1247a7 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fa8f3390c950b9342ba821f52f1247a7 != \f\a\8\f\3\3\9\0\c\9\5\0\b\9\3\4\2\b\a\8\2\1\f\5\2\f\1\2\4\7\a\7 ]] 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:39:45.863 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81222 ]] 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81222 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81222 ']' 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 81222 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81222 00:39:46.122 killing process with pid 81222 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81222' 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@969 -- # kill 81222 00:39:46.122 16:54:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 81222 00:39:47.527 [2024-10-17 16:54:23.363680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:39:47.527 [2024-10-17 16:54:23.384153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.384302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:39:47.527 [2024-10-17 16:54:23.384447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:39:47.527 [2024-10-17 16:54:23.384486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.384540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:39:47.527 [2024-10-17 16:54:23.388491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.388649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:39:47.527 [2024-10-17 16:54:23.388790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.890 ms 00:39:47.527 [2024-10-17 16:54:23.388827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.389061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.389123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:39:47.527 [2024-10-17 16:54:23.389217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:39:47.527 [2024-10-17 16:54:23.389247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.390348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.390482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:39:47.527 [2024-10-17 16:54:23.390564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.065 ms 00:39:47.527 [2024-10-17 16:54:23.390579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.391519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.391552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:39:47.527 [2024-10-17 16:54:23.391565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.900 ms 00:39:47.527 [2024-10-17 16:54:23.391574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.406524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.406649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:39:47.527 [2024-10-17 16:54:23.406814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.937 ms 00:39:47.527 [2024-10-17 16:54:23.406853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.414710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.414832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:39:47.527 [2024-10-17 16:54:23.414958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.804 ms 00:39:47.527 [2024-10-17 16:54:23.414994] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.415104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.415139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:39:47.527 [2024-10-17 16:54:23.415225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:39:47.527 [2024-10-17 16:54:23.415260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.429589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.429733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:39:47.527 [2024-10-17 16:54:23.429873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.310 ms 00:39:47.527 [2024-10-17 16:54:23.429909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.444302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.444419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:39:47.527 [2024-10-17 16:54:23.444501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.361 ms 00:39:47.527 [2024-10-17 16:54:23.444534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.458402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.458519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:39:47.527 [2024-10-17 16:54:23.458537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.773 ms 00:39:47.527 [2024-10-17 16:54:23.458563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.472857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.527 [2024-10-17 16:54:23.472891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:39:47.527 [2024-10-17 16:54:23.472903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.246 ms 00:39:47.527 [2024-10-17 16:54:23.472912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.527 [2024-10-17 16:54:23.472946] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:39:47.527 [2024-10-17 16:54:23.472967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:39:47.527 [2024-10-17 16:54:23.472979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:39:47.527 [2024-10-17 16:54:23.472990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:39:47.527 [2024-10-17 16:54:23.473001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:47.527 [2024-10-17 16:54:23.473012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:47.527 [2024-10-17 16:54:23.473023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:47.527 [2024-10-17 16:54:23.473033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 
[2024-10-17 16:54:23.473053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:47.528 [2024-10-17 16:54:23.473156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:39:47.528 [2024-10-17 16:54:23.473165] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 671ebd71-148e-486c-a3db-a0a82fbfcac9 00:39:47.528 [2024-10-17 16:54:23.473175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:39:47.528 [2024-10-17 16:54:23.473185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:39:47.528 [2024-10-17 16:54:23.473194] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:39:47.528 [2024-10-17 16:54:23.473204] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:39:47.528 [2024-10-17 16:54:23.473213] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:39:47.528 [2024-10-17 16:54:23.473224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:39:47.528 [2024-10-17 16:54:23.473233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:39:47.528 [2024-10-17 16:54:23.473242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:39:47.528 [2024-10-17 16:54:23.473252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:39:47.528 [2024-10-17 16:54:23.473262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.528 [2024-10-17 16:54:23.473272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:39:47.528 [2024-10-17 16:54:23.473287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:39:47.528 [2024-10-17 16:54:23.473297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.492852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.528 [2024-10-17 16:54:23.492882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:39:47.528 [2024-10-17 16:54:23.492895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.557 ms 00:39:47.528 [2024-10-17 16:54:23.492905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:39:47.528 [2024-10-17 16:54:23.493426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:39:47.528 [2024-10-17 16:54:23.493442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:39:47.528 [2024-10-17 16:54:23.493452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.500 ms 00:39:47.528 [2024-10-17 16:54:23.493461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.557230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.557265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:39:47.528 [2024-10-17 16:54:23.557278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.557289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.557320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.557337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:39:47.528 [2024-10-17 16:54:23.557347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.557358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.557430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.557444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:39:47.528 [2024-10-17 16:54:23.557454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.557464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.557482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.557492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:39:47.528 [2024-10-17 16:54:23.557507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.557517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.677476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.677520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:39:47.528 [2024-10-17 16:54:23.677534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.677560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.773450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.773497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:39:47.528 [2024-10-17 16:54:23.773516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.773527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.773631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.773643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:39:47.528 [2024-10-17 16:54:23.773654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.773664] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.773727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.773741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:39:47.528 [2024-10-17 16:54:23.773752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.773776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.773875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.773892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:39:47.528 [2024-10-17 16:54:23.773902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.773912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.773952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.773964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:39:47.528 [2024-10-17 16:54:23.773975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.773984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.774026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.774036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:39:47.528 [2024-10-17 16:54:23.774046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.774056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.774097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:39:47.528 [2024-10-17 16:54:23.774107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:39:47.528 [2024-10-17 16:54:23.774117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:39:47.528 [2024-10-17 16:54:23.774130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:39:47.528 [2024-10-17 16:54:23.774261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 390.712 ms, result 0 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:48.906 Remove shared memory files 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:39:48.906 16:54:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81003 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:39:48.906 ************************************ 00:39:48.906 END TEST ftl_upgrade_shutdown 00:39:48.906 ************************************ 00:39:48.906 00:39:48.906 real 1m24.616s 00:39:48.906 user 1m55.785s 00:39:48.906 sys 0m22.943s 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:48.906 16:54:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:39:48.906 16:54:25 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:39:48.906 16:54:25 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:39:48.906 16:54:25 ftl -- ftl/ftl.sh@14 -- # killprocess 73972 00:39:48.906 16:54:25 ftl -- common/autotest_common.sh@950 -- # '[' -z 73972 ']' 00:39:48.906 16:54:25 ftl -- common/autotest_common.sh@954 -- # kill -0 73972 00:39:48.906 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (73972) - No such process 00:39:48.906 Process with pid 73972 is not found 00:39:48.906 16:54:25 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 73972 is not found' 00:39:48.906 16:54:25 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:39:48.906 16:54:25 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81434 00:39:48.907 16:54:25 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:48.907 16:54:25 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81434 00:39:48.907 16:54:25 ftl -- common/autotest_common.sh@831 -- # '[' -z 81434 ']' 00:39:48.907 16:54:25 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.907 16:54:25 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:48.907 16:54:25 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:48.907 16:54:25 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:48.907 16:54:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:48.907 [2024-10-17 16:54:25.168826] Starting SPDK v25.01-pre git sha1 c1dd46fc6 / DPDK 24.03.0 initialization... 
00:39:48.907 [2024-10-17 16:54:25.168942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81434 ] 00:39:49.165 [2024-10-17 16:54:25.338669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:49.165 [2024-10-17 16:54:25.445455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:50.100 16:54:26 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:50.100 16:54:26 ftl -- common/autotest_common.sh@864 -- # return 0 00:39:50.100 16:54:26 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:50.358 nvme0n1 00:39:50.358 16:54:26 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:39:50.358 16:54:26 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:50.358 16:54:26 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:50.617 16:54:26 ftl -- ftl/common.sh@28 -- # stores=825d1887-18ef-4daa-b4cb-58dd317f9e70 00:39:50.617 16:54:26 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:39:50.617 16:54:26 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 825d1887-18ef-4daa-b4cb-58dd317f9e70 00:39:50.876 16:54:26 ftl -- ftl/ftl.sh@23 -- # killprocess 81434 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@950 -- # '[' -z 81434 ']' 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@954 -- # kill -0 81434 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@955 -- # uname 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81434 00:39:50.876 killing process with pid 81434 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81434' 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@969 -- # kill 81434 00:39:50.876 16:54:26 ftl -- common/autotest_common.sh@974 -- # wait 81434 00:39:53.419 16:54:29 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:53.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:39:53.419 Waiting for block devices as requested 00:39:53.679 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:39:53.679 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:53.679 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:39:53.938 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:39:59.233 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:39:59.233 16:54:35 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:39:59.233 16:54:35 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:39:59.233 Remove shared memory files 00:39:59.233 16:54:35 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:39:59.233 16:54:35 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:39:59.233 16:54:35 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:39:59.233 16:54:35 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:39:59.233 16:54:35 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:39:59.233 
************************************ 00:39:59.233 END TEST ftl 00:39:59.233 ************************************ 00:39:59.233 00:39:59.233 real 10m55.864s 00:39:59.233 user 13m26.583s 00:39:59.233 sys 1m30.419s 00:39:59.233 16:54:35 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:59.233 16:54:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:59.233 16:54:35 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:59.233 16:54:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:59.233 16:54:35 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:59.233 16:54:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:59.233 16:54:35 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:39:59.233 16:54:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:59.233 16:54:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:59.233 16:54:35 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:39:59.233 16:54:35 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:39:59.233 16:54:35 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:39:59.233 16:54:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:59.233 16:54:35 -- common/autotest_common.sh@10 -- # set +x 00:39:59.233 16:54:35 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:39:59.233 16:54:35 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:39:59.233 16:54:35 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:39:59.233 16:54:35 -- common/autotest_common.sh@10 -- # set +x 00:40:01.138 INFO: APP EXITING 00:40:01.138 INFO: killing all VMs 00:40:01.138 INFO: killing vhost app 00:40:01.138 INFO: EXIT DONE 00:40:01.707 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:02.274 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:40:02.274 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:40:02.274 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:40:02.274 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:40:02.843 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:40:03.102 Cleaning 00:40:03.103 Removing: /var/run/dpdk/spdk0/config 00:40:03.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:40:03.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:40:03.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:40:03.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:40:03.103 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:40:03.103 Removing: /var/run/dpdk/spdk0/hugepage_info 00:40:03.103 Removing: /var/run/dpdk/spdk0 00:40:03.103 Removing: /var/run/dpdk/spdk_pid57506 00:40:03.103 Removing: /var/run/dpdk/spdk_pid57752 00:40:03.103 Removing: /var/run/dpdk/spdk_pid57981 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58096 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58142 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58280 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58298 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58508 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58624 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58732 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58855 00:40:03.103 Removing: /var/run/dpdk/spdk_pid58968 00:40:03.103 Removing: /var/run/dpdk/spdk_pid59007 00:40:03.103 Removing: /var/run/dpdk/spdk_pid59044 00:40:03.103 Removing: /var/run/dpdk/spdk_pid59120 00:40:03.103 Removing: /var/run/dpdk/spdk_pid59237 00:40:03.103 Removing: /var/run/dpdk/spdk_pid59696 00:40:03.103 Removing: /var/run/dpdk/spdk_pid59771 
00:40:03.103 Removing: /var/run/dpdk/spdk_pid59847 00:40:03.103 Removing: /var/run/dpdk/spdk_pid59863 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60021 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60037 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60191 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60212 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60282 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60300 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60364 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60387 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60587 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60619 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60708 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60902 00:40:03.363 Removing: /var/run/dpdk/spdk_pid60997 00:40:03.363 Removing: /var/run/dpdk/spdk_pid61039 00:40:03.363 Removing: /var/run/dpdk/spdk_pid61490 00:40:03.363 Removing: /var/run/dpdk/spdk_pid61599 00:40:03.363 Removing: /var/run/dpdk/spdk_pid61714 00:40:03.363 Removing: /var/run/dpdk/spdk_pid61772 00:40:03.363 Removing: /var/run/dpdk/spdk_pid61798 00:40:03.363 Removing: /var/run/dpdk/spdk_pid61882 00:40:03.363 Removing: /var/run/dpdk/spdk_pid62528 00:40:03.363 Removing: /var/run/dpdk/spdk_pid62570 00:40:03.363 Removing: /var/run/dpdk/spdk_pid63079 00:40:03.363 Removing: /var/run/dpdk/spdk_pid63177 00:40:03.363 Removing: /var/run/dpdk/spdk_pid63297 00:40:03.363 Removing: /var/run/dpdk/spdk_pid63350 00:40:03.363 Removing: /var/run/dpdk/spdk_pid63376 00:40:03.363 Removing: /var/run/dpdk/spdk_pid63401 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65296 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65453 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65457 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65469 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65521 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65525 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65537 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65582 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65586 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65598 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65649 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65653 00:40:03.363 Removing: /var/run/dpdk/spdk_pid65665 00:40:03.363 Removing: /var/run/dpdk/spdk_pid67068 00:40:03.363 Removing: /var/run/dpdk/spdk_pid67182 00:40:03.363 Removing: /var/run/dpdk/spdk_pid68623 00:40:03.363 Removing: /var/run/dpdk/spdk_pid69994 00:40:03.363 Removing: /var/run/dpdk/spdk_pid70114 00:40:03.363 Removing: /var/run/dpdk/spdk_pid70229 00:40:03.363 Removing: /var/run/dpdk/spdk_pid70338 00:40:03.363 Removing: /var/run/dpdk/spdk_pid70475 00:40:03.363 Removing: /var/run/dpdk/spdk_pid70555 00:40:03.363 Removing: /var/run/dpdk/spdk_pid70709 00:40:03.363 Removing: /var/run/dpdk/spdk_pid71087 00:40:03.363 Removing: /var/run/dpdk/spdk_pid71129 00:40:03.363 Removing: /var/run/dpdk/spdk_pid71606 00:40:03.363 Removing: /var/run/dpdk/spdk_pid71794 00:40:03.363 Removing: /var/run/dpdk/spdk_pid71902 00:40:03.363 Removing: /var/run/dpdk/spdk_pid72017 00:40:03.623 Removing: /var/run/dpdk/spdk_pid72076 00:40:03.623 Removing: /var/run/dpdk/spdk_pid72102 00:40:03.623 Removing: /var/run/dpdk/spdk_pid72413 00:40:03.623 Removing: /var/run/dpdk/spdk_pid72481 00:40:03.623 Removing: /var/run/dpdk/spdk_pid72575 00:40:03.623 Removing: /var/run/dpdk/spdk_pid73006 00:40:03.623 Removing: /var/run/dpdk/spdk_pid73162 00:40:03.623 Removing: /var/run/dpdk/spdk_pid73972 00:40:03.623 Removing: /var/run/dpdk/spdk_pid74122 00:40:03.623 Removing: /var/run/dpdk/spdk_pid74336 00:40:03.623 Removing: 
/var/run/dpdk/spdk_pid74444 00:40:03.623 Removing: /var/run/dpdk/spdk_pid74758 00:40:03.623 Removing: /var/run/dpdk/spdk_pid75017 00:40:03.623 Removing: /var/run/dpdk/spdk_pid75375 00:40:03.623 Removing: /var/run/dpdk/spdk_pid75587 00:40:03.623 Removing: /var/run/dpdk/spdk_pid75717 00:40:03.623 Removing: /var/run/dpdk/spdk_pid75786 00:40:03.623 Removing: /var/run/dpdk/spdk_pid75913 00:40:03.623 Removing: /var/run/dpdk/spdk_pid75949 00:40:03.623 Removing: /var/run/dpdk/spdk_pid76016 00:40:03.623 Removing: /var/run/dpdk/spdk_pid76209 00:40:03.623 Removing: /var/run/dpdk/spdk_pid76462 00:40:03.623 Removing: /var/run/dpdk/spdk_pid76858 00:40:03.623 Removing: /var/run/dpdk/spdk_pid77266 00:40:03.623 Removing: /var/run/dpdk/spdk_pid77691 00:40:03.623 Removing: /var/run/dpdk/spdk_pid78193 00:40:03.623 Removing: /var/run/dpdk/spdk_pid78341 00:40:03.623 Removing: /var/run/dpdk/spdk_pid78434 00:40:03.623 Removing: /var/run/dpdk/spdk_pid79041 00:40:03.623 Removing: /var/run/dpdk/spdk_pid79116 00:40:03.623 Removing: /var/run/dpdk/spdk_pid79552 00:40:03.623 Removing: /var/run/dpdk/spdk_pid79916 00:40:03.623 Removing: /var/run/dpdk/spdk_pid80436 00:40:03.623 Removing: /var/run/dpdk/spdk_pid80558 00:40:03.623 Removing: /var/run/dpdk/spdk_pid80617 00:40:03.623 Removing: /var/run/dpdk/spdk_pid80681 00:40:03.623 Removing: /var/run/dpdk/spdk_pid80739 00:40:03.623 Removing: /var/run/dpdk/spdk_pid80803 00:40:03.623 Removing: /var/run/dpdk/spdk_pid81003 00:40:03.623 Removing: /var/run/dpdk/spdk_pid81076 00:40:03.623 Removing: /var/run/dpdk/spdk_pid81143 00:40:03.623 Removing: /var/run/dpdk/spdk_pid81222 00:40:03.623 Removing: /var/run/dpdk/spdk_pid81257 00:40:03.623 Removing: /var/run/dpdk/spdk_pid81318 00:40:03.623 Removing: /var/run/dpdk/spdk_pid81434 00:40:03.623 Clean 00:40:03.882 16:54:39 -- common/autotest_common.sh@1451 -- # return 0 00:40:03.882 16:54:39 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:40:03.882 16:54:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:03.882 16:54:39 -- common/autotest_common.sh@10 -- # set +x 00:40:03.882 16:54:39 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:40:03.882 16:54:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:40:03.882 16:54:39 -- common/autotest_common.sh@10 -- # set +x 00:40:03.882 16:54:40 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:03.882 16:54:40 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:40:03.882 16:54:40 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:40:03.882 16:54:40 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:40:03.882 16:54:40 -- spdk/autotest.sh@394 -- # hostname 00:40:03.882 16:54:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:40:04.140 geninfo: WARNING: invalid characters removed from testname! 
00:40:30.713 16:55:03 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:30.713 16:55:06 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:33.237 16:55:09 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:35.136 16:55:11 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:37.038 16:55:13 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:39.571 16:55:15 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:40:41.475 16:55:17 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:41.475 16:55:17 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:40:41.475 16:55:17 -- common/autotest_common.sh@1691 -- $ lcov --version 00:40:41.475 16:55:17 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:40:41.475 16:55:17 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:40:41.475 16:55:17 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:40:41.475 16:55:17 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:40:41.475 16:55:17 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:40:41.475 16:55:17 -- scripts/common.sh@336 -- $ IFS=.-: 00:40:41.475 16:55:17 -- scripts/common.sh@336 -- $ read -ra ver1 00:40:41.475 16:55:17 -- scripts/common.sh@337 -- $ IFS=.-: 00:40:41.475 16:55:17 -- scripts/common.sh@337 -- $ read -ra ver2 00:40:41.475 16:55:17 -- scripts/common.sh@338 -- $ local 'op=<' 00:40:41.475 16:55:17 -- scripts/common.sh@340 -- $ ver1_l=2 00:40:41.475 16:55:17 -- scripts/common.sh@341 -- $ ver2_l=1 00:40:41.475 16:55:17 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:40:41.475 16:55:17 -- scripts/common.sh@344 -- $ case "$op" in 00:40:41.475 16:55:17 -- scripts/common.sh@345 -- $ : 1 00:40:41.475 16:55:17 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:40:41.475 16:55:17 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:41.475 16:55:17 -- scripts/common.sh@365 -- $ decimal 1 00:40:41.475 16:55:17 -- scripts/common.sh@353 -- $ local d=1 00:40:41.475 16:55:17 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:40:41.475 16:55:17 -- scripts/common.sh@355 -- $ echo 1 00:40:41.475 16:55:17 -- scripts/common.sh@365 -- $ ver1[v]=1 00:40:41.475 16:55:17 -- scripts/common.sh@366 -- $ decimal 2 00:40:41.475 16:55:17 -- scripts/common.sh@353 -- $ local d=2 00:40:41.475 16:55:17 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:40:41.475 16:55:17 -- scripts/common.sh@355 -- $ echo 2 00:40:41.475 16:55:17 -- scripts/common.sh@366 -- $ ver2[v]=2 00:40:41.475 16:55:17 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:40:41.475 16:55:17 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:40:41.475 16:55:17 -- scripts/common.sh@368 -- $ return 0 00:40:41.475 16:55:17 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.475 16:55:17 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:40:41.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.475 --rc genhtml_branch_coverage=1 00:40:41.475 --rc genhtml_function_coverage=1 00:40:41.475 --rc genhtml_legend=1 00:40:41.475 --rc geninfo_all_blocks=1 00:40:41.475 --rc geninfo_unexecuted_blocks=1 00:40:41.475 00:40:41.475 ' 00:40:41.475 16:55:17 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:40:41.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.475 --rc genhtml_branch_coverage=1 00:40:41.475 --rc genhtml_function_coverage=1 00:40:41.475 --rc genhtml_legend=1 00:40:41.475 --rc geninfo_all_blocks=1 00:40:41.475 --rc geninfo_unexecuted_blocks=1 00:40:41.475 00:40:41.475 ' 00:40:41.475 16:55:17 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:40:41.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.475 --rc genhtml_branch_coverage=1 00:40:41.475 --rc genhtml_function_coverage=1 00:40:41.475 --rc genhtml_legend=1 00:40:41.475 --rc geninfo_all_blocks=1 00:40:41.475 --rc geninfo_unexecuted_blocks=1 00:40:41.475 00:40:41.475 ' 00:40:41.475 16:55:17 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:40:41.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.475 --rc genhtml_branch_coverage=1 00:40:41.475 --rc genhtml_function_coverage=1 00:40:41.475 --rc genhtml_legend=1 00:40:41.475 --rc geninfo_all_blocks=1 00:40:41.475 --rc geninfo_unexecuted_blocks=1 00:40:41.475 00:40:41.475 ' 00:40:41.475 16:55:17 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:41.475 16:55:17 -- scripts/common.sh@15 -- $ shopt -s extglob 00:40:41.475 16:55:17 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:40:41.475 16:55:17 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.475 16:55:17 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.475 16:55:17 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.475 16:55:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.475 16:55:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.475 16:55:17 -- paths/export.sh@5 -- $ export PATH 00:40:41.475 16:55:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.475 16:55:17 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:40:41.475 16:55:17 -- common/autobuild_common.sh@486 -- $ date +%s 00:40:41.475 16:55:17 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729184117.XXXXXX 00:40:41.475 16:55:17 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729184117.WPyai6 00:40:41.475 16:55:17 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:40:41.475 16:55:17 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:40:41.475 16:55:17 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:40:41.475 16:55:17 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:40:41.475 16:55:17 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:40:41.475 16:55:17 -- common/autobuild_common.sh@502 -- $ get_config_params 00:40:41.475 16:55:17 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:40:41.475 16:55:17 -- common/autotest_common.sh@10 -- $ set +x 00:40:41.475 16:55:17 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:40:41.475 16:55:17 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:40:41.475 16:55:17 -- pm/common@17 -- $ local monitor 00:40:41.475 16:55:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:41.475 16:55:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:40:41.475 16:55:17 -- pm/common@25 -- $ sleep 1 00:40:41.475 16:55:17 -- pm/common@21 -- $ date +%s 00:40:41.475 16:55:17 -- pm/common@21 -- $ date +%s 00:40:41.475 16:55:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729184117 00:40:41.475 16:55:17 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1729184117 00:40:41.475 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729184117_collect-cpu-load.pm.log 00:40:41.475 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1729184117_collect-vmstat.pm.log 00:40:42.410 16:55:18 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:40:42.410 16:55:18 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:40:42.410 16:55:18 -- spdk/autopackage.sh@14 -- $ timing_finish 00:40:42.410 16:55:18 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:42.410 16:55:18 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:42.410 16:55:18 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:40:42.410 16:55:18 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:40:42.410 16:55:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:40:42.410 16:55:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:40:42.410 16:55:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:42.410 16:55:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:40:42.410 16:55:18 -- pm/common@44 -- $ pid=83158 00:40:42.410 16:55:18 -- pm/common@50 -- $ kill -TERM 83158 00:40:42.410 16:55:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:40:42.410 16:55:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:40:42.410 16:55:18 -- pm/common@44 -- $ pid=83160 00:40:42.410 16:55:18 -- pm/common@50 -- $ kill -TERM 83160 00:40:42.410 + [[ -n 5242 ]] 00:40:42.410 + sudo kill 5242 00:40:42.676 [Pipeline] } 00:40:42.690 [Pipeline] // timeout 00:40:42.696 [Pipeline] } 00:40:42.714 [Pipeline] // stage 00:40:42.718 [Pipeline] } 00:40:42.731 [Pipeline] // catchError 00:40:42.739 [Pipeline] stage 00:40:42.805 [Pipeline] { (Stop VM) 00:40:42.815 [Pipeline] sh 00:40:43.091 + vagrant halt 00:40:45.623 ==> default: Halting domain... 00:40:52.247 [Pipeline] sh 00:40:52.526 + vagrant destroy -f 00:40:55.060 ==> default: Removing domain... 
00:40:55.637 [Pipeline] sh 00:40:55.916 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output 00:40:55.924 [Pipeline] } 00:40:55.937 [Pipeline] // stage 00:40:55.942 [Pipeline] } 00:40:55.955 [Pipeline] // dir 00:40:55.959 [Pipeline] } 00:40:55.972 [Pipeline] // wrap 00:40:55.977 [Pipeline] } 00:40:55.988 [Pipeline] // catchError 00:40:55.998 [Pipeline] stage 00:40:55.999 [Pipeline] { (Epilogue) 00:40:56.011 [Pipeline] sh 00:40:56.292 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:01.604 [Pipeline] catchError 00:41:01.606 [Pipeline] { 00:41:01.617 [Pipeline] sh 00:41:01.899 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:01.899 Artifacts sizes are good 00:41:01.909 [Pipeline] } 00:41:01.922 [Pipeline] // catchError 00:41:01.933 [Pipeline] archiveArtifacts 00:41:01.940 Archiving artifacts 00:41:02.060 [Pipeline] cleanWs 00:41:02.070 [WS-CLEANUP] Deleting project workspace... 00:41:02.070 [WS-CLEANUP] Deferred wipeout is used... 00:41:02.076 [WS-CLEANUP] done 00:41:02.078 [Pipeline] } 00:41:02.094 [Pipeline] // stage 00:41:02.099 [Pipeline] } 00:41:02.112 [Pipeline] // node 00:41:02.116 [Pipeline] End of Pipeline 00:41:02.145 Finished: SUCCESS