00:00:00.000 Started by upstream project "autotest-per-patch" build number 132079
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.054 The recommended git tool is: git
00:00:00.054 using credential 00000000-0000-0000-0000-000000000002
00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.093 Fetching changes from the remote Git repository
00:00:00.095 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.157 Using shallow fetch with depth 1
00:00:00.157 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.158 > git --version # timeout=10
00:00:00.223 > git --version # 'git version 2.39.2'
00:00:00.223 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.060 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.072 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.085 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:06.085 > git config core.sparsecheckout # timeout=10
00:00:06.098 > git read-tree -mu HEAD # timeout=10
00:00:06.115 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:06.130 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:06.130 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
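The SCM step above boils down to a short, reproducible sequence. A minimal sketch (repository URL and revision taken from the log; the proxy, credentials, and timeouts are omitted here):

#!/usr/bin/env bash
# Sketch: replay the pinned jbp checkout outside Jenkins. Values are the
# ones logged above; this is illustrative, not part of the build itself.
set -euo pipefail
repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
rev=b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf
git init jbp && cd jbp
git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
git checkout -f "$rev"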
00:00:06.231 [Pipeline] Start of Pipeline
00:00:06.242 [Pipeline] library
00:00:06.244 Loading library shm_lib@master
00:00:06.244 Library shm_lib@master is cached. Copying from home.
00:00:06.257 [Pipeline] node
00:00:06.267 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.268 [Pipeline] {
00:00:06.275 [Pipeline] catchError
00:00:06.276 [Pipeline] {
00:00:06.287 [Pipeline] wrap
00:00:06.295 [Pipeline] {
00:00:06.304 [Pipeline] stage
00:00:06.306 [Pipeline] { (Prologue)
00:00:06.323 [Pipeline] echo
00:00:06.325 Node: VM-host-WFP1
00:00:06.331 [Pipeline] cleanWs
00:00:06.340 [WS-CLEANUP] Deleting project workspace...
00:00:06.340 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.346 [WS-CLEANUP] done
00:00:06.548 [Pipeline] setCustomBuildProperty
00:00:06.618 [Pipeline] httpRequest
00:00:06.967 [Pipeline] echo
00:00:06.969 Sorcerer 10.211.164.101 is alive
00:00:06.979 [Pipeline] retry
00:00:06.981 [Pipeline] {
00:00:06.995 [Pipeline] httpRequest
00:00:06.999 HttpMethod: GET
00:00:07.000 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.001 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.017 Response Code: HTTP/1.1 200 OK
00:00:07.017 Success: Status code 200 is in the accepted range: 200,404
00:00:07.018 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.774 [Pipeline] }
00:00:12.790 [Pipeline] // retry
00:00:12.798 [Pipeline] sh
00:00:13.080 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:13.092 [Pipeline] httpRequest
00:00:14.179 [Pipeline] echo
00:00:14.180 Sorcerer 10.211.164.101 is alive
00:00:14.190 [Pipeline] retry
00:00:14.192 [Pipeline] {
00:00:14.203 [Pipeline] httpRequest
00:00:14.208 HttpMethod: GET
00:00:14.208 URL: http://10.211.164.101/packages/spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz
00:00:14.209 Sending request to url: http://10.211.164.101/packages/spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz
00:00:14.233 Response Code: HTTP/1.1 200 OK
00:00:14.234 Success: Status code 200 is in the accepted range: 200,404
00:00:14.234 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz
00:02:03.303 [Pipeline] }
00:02:03.321 [Pipeline] // retry
00:02:03.330 [Pipeline] sh
00:02:03.613 + tar --no-same-owner -xf spdk_8053cd6b8f8ed48dce8f8f22117219c22438e9a7.tar.gz
00:02:06.216 [Pipeline] sh
00:02:06.526 + git -C spdk log --oneline -n5
00:02:06.526 8053cd6b8 test/iscsi_tgt: Remove support for the namespace arg
00:02:06.526 461b97702 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP
00:02:06.526 4c618f461 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy
00:02:06.526 a51629061 test/nvmf: Remove all transport conditions from the test suites
00:02:06.526 9f70a047a test/nvmf: Drop $RDMA_IP_LIST
00:02:06.546 [Pipeline] writeFile
00:02:06.561 [Pipeline] sh
00:02:06.845 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:06.856 [Pipeline] sh
00:02:07.138 + cat autorun-spdk.conf
00:02:07.138 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:07.138 SPDK_TEST_NVME=1
00:02:07.138 SPDK_TEST_FTL=1
00:02:07.138 SPDK_TEST_ISAL=1
00:02:07.138 SPDK_RUN_ASAN=1
00:02:07.138 SPDK_RUN_UBSAN=1
00:02:07.138 SPDK_TEST_XNVME=1
00:02:07.138 SPDK_TEST_NVME_FDP=1
00:02:07.138 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:07.146 RUN_NIGHTLY=0
00:02:07.148 [Pipeline] }
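The autorun-spdk.conf printed above is a plain shell fragment; later stages source it and branch on the flags, which is why FTL and FDP images appear in the next stage. A hedged sketch of that consumption pattern (the echo messages are illustrative only):

# Sketch: autorun-spdk.conf is sourceable shell; consumers branch on flags.
source ./autorun-spdk.conf
if (( SPDK_TEST_FTL == 1 )); then
  echo "FTL enabled: an extra nvme-ftl.img backend will be created"
fi
if (( SPDK_TEST_NVME_FDP == 1 )); then
  echo "FDP enabled: an FDP-capable NVMe subsystem will be attached"
fi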
00:02:07.161 [Pipeline] // stage
00:02:07.176 [Pipeline] stage
00:02:07.178 [Pipeline] { (Run VM)
00:02:07.191 [Pipeline] sh
00:02:07.473 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:07.473 + echo 'Start stage prepare_nvme.sh'
00:02:07.473 Start stage prepare_nvme.sh
00:02:07.473 + [[ -n 1 ]]
00:02:07.473 + disk_prefix=ex1
00:02:07.473 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:07.473 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:07.473 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:07.473 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:07.473 ++ SPDK_TEST_NVME=1
00:02:07.473 ++ SPDK_TEST_FTL=1
00:02:07.473 ++ SPDK_TEST_ISAL=1
00:02:07.473 ++ SPDK_RUN_ASAN=1
00:02:07.473 ++ SPDK_RUN_UBSAN=1
00:02:07.473 ++ SPDK_TEST_XNVME=1
00:02:07.473 ++ SPDK_TEST_NVME_FDP=1
00:02:07.473 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:07.473 ++ RUN_NIGHTLY=0
00:02:07.473 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:07.473 + nvme_files=()
00:02:07.473 + declare -A nvme_files
00:02:07.473 + backend_dir=/var/lib/libvirt/images/backends
00:02:07.473 + nvme_files['nvme.img']=5G
00:02:07.473 + nvme_files['nvme-cmb.img']=5G
00:02:07.473 + nvme_files['nvme-multi0.img']=4G
00:02:07.473 + nvme_files['nvme-multi1.img']=4G
00:02:07.473 + nvme_files['nvme-multi2.img']=4G
00:02:07.473 + nvme_files['nvme-openstack.img']=8G
00:02:07.473 + nvme_files['nvme-zns.img']=5G
00:02:07.473 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:07.473 + (( SPDK_TEST_FTL == 1 ))
00:02:07.473 + nvme_files["nvme-ftl.img"]=6G
00:02:07.473 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:07.473 + nvme_files["nvme-fdp.img"]=1G
00:02:07.473 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:07.473 + for nvme in "${!nvme_files[@]}"
00:02:07.473 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:02:07.473 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:07.473 + for nvme in "${!nvme_files[@]}"
00:02:07.473 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G
00:02:07.733 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:07.733 + for nvme in "${!nvme_files[@]}"
00:02:07.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:02:07.733 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:07.733 + for nvme in "${!nvme_files[@]}"
00:02:07.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:02:07.733 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:07.733 + for nvme in "${!nvme_files[@]}"
00:02:07.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:02:07.733 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:07.733 + for nvme in "${!nvme_files[@]}"
00:02:07.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:02:07.992 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:07.992 + for nvme in "${!nvme_files[@]}"
00:02:07.992 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:02:08.251 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:08.251 + for nvme in "${!nvme_files[@]}"
00:02:08.251 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G
00:02:08.251 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:08.251 + for nvme in "${!nvme_files[@]}"
00:02:08.251 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:02:08.511 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:08.511 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:02:08.511 + echo 'End stage prepare_nvme.sh'
00:02:08.511 End stage prepare_nvme.sh
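The stage above drives everything from one associative array keyed by image name; bash iterates it in arbitrary order, which is why the Formatting lines come out shuffled. A condensed sketch of the pattern (truncate stands in for create_nvme_img.sh; the array is abbreviated):

# Sketch of the prepare_nvme.sh pattern seen above (illustrative).
declare -A nvme_files=(
  [nvme.img]=5G [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
  [nvme-ftl.img]=6G [nvme-fdp.img]=1G   # FTL/FDP entries added only when those tests are on
)
backend_dir=/var/lib/libvirt/images/backends
for nvme in "${!nvme_files[@]}"; do      # associative arrays are unordered
  truncate -s "${nvme_files[$nvme]}" "$backend_dir/ex1-$nvme"  # stand-in for create_nvme_img.sh
done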
00:02:08.521 [Pipeline] sh
00:02:08.833 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:08.833 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:08.833
00:02:08.833 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:08.833 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:08.833 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:08.833 HELP=0
00:02:08.833 DRY_RUN=0
00:02:08.833 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,
00:02:08.833 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:08.833 NVME_AUTO_CREATE=0
00:02:08.833 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,,
00:02:08.833 NVME_CMB=,,,,
00:02:08.833 NVME_PMR=,,,,
00:02:08.833 NVME_ZNS=,,,,
00:02:08.833 NVME_MS=true,,,,
00:02:08.833 NVME_FDP=,,,on,
00:02:08.833 SPDK_VAGRANT_DISTRO=fedora39
00:02:08.833 SPDK_VAGRANT_VMCPU=10
00:02:08.833 SPDK_VAGRANT_VMRAM=12288
00:02:08.833 SPDK_VAGRANT_PROVIDER=libvirt
00:02:08.833 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:08.833 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:08.833 SPDK_OPENSTACK_NETWORK=0
00:02:08.833 VAGRANT_PACKAGE_BOX=0
00:02:08.833 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:08.833 FORCE_DISTRO=true
00:02:08.833 VAGRANT_BOX_VERSION=
00:02:08.833 EXTRA_VAGRANTFILES=
00:02:08.833 NIC_MODEL=e1000
00:02:08.833
00:02:08.833 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:08.833 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:11.367 Bringing machine 'default' up with 'libvirt' provider...
00:02:12.304 ==> default: Creating image (snapshot of base box volume).
00:02:12.563 ==> default: Creating domain with the following settings...
00:02:12.563 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730829101_8bb5701fba73021876ad
00:02:12.563 ==> default: -- Domain type: kvm
00:02:12.563 ==> default: -- Cpus: 10
00:02:12.563 ==> default: -- Feature: acpi
00:02:12.563 ==> default: -- Feature: apic
00:02:12.563 ==> default: -- Feature: pae
00:02:12.563 ==> default: -- Memory: 12288M
00:02:12.563 ==> default: -- Memory Backing: hugepages:
00:02:12.563 ==> default: -- Management MAC:
00:02:12.563 ==> default: -- Loader:
00:02:12.563 ==> default: -- Nvram:
00:02:12.563 ==> default: -- Base box: spdk/fedora39
00:02:12.563 ==> default: -- Storage pool: default
00:02:12.563 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730829101_8bb5701fba73021876ad.img (20G)
00:02:12.563 ==> default: -- Volume Cache: default
00:02:12.563 ==> default: -- Kernel:
00:02:12.563 ==> default: -- Initrd:
00:02:12.563 ==> default: -- Graphics Type: vnc
00:02:12.563 ==> default: -- Graphics Port: -1
00:02:12.563 ==> default: -- Graphics IP: 127.0.0.1
00:02:12.563 ==> default: -- Graphics Password: Not defined
00:02:12.563 ==> default: -- Video Type: cirrus
00:02:12.563 ==> default: -- Video VRAM: 9216
00:02:12.563 ==> default: -- Sound Type:
00:02:12.563 ==> default: -- Keymap: en-us
00:02:12.563 ==> default: -- TPM Path:
00:02:12.563 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:12.563 ==> default: -- Command line args:
00:02:12.563 ==> default: -> value=-device,
00:02:12.563 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:12.564 ==> default: -> value=-drive,
00:02:12.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:12.564 ==> default: -> value=-drive,
00:02:12.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:12.564 ==> default: -> value=-drive,
00:02:12.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:12.564 ==> default: -> value=-drive,
00:02:12.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:12.564 ==> default: -> value=-drive,
00:02:12.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:12.564 ==> default: -> value=-drive,
00:02:12.564 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:12.564 ==> default: -> value=-device,
00:02:12.564 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
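Each value= pair above becomes one QEMU argument. Pulling out just the fourth controller shows the Flexible Data Placement wiring; a condensed, hand-assembled view of the logged arguments (the other controllers, nvme-0 through nvme-2, follow the same -device/-drive/-device pattern):

# Illustrative condensation of the logged args for controller nvme-3:
qemu-system-x86_64 \
  -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
  -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0 \
  -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096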
00:02:12.823 ==> default: Creating shared folders metadata...
00:02:12.823 ==> default: Starting domain.
00:02:14.728 ==> default: Waiting for domain to get an IP address...
00:02:32.822 ==> default: Waiting for SSH to become available...
00:02:32.822 ==> default: Configuring and enabling network interfaces...
00:02:37.042 default: SSH address: 192.168.121.228:22
00:02:37.042 default: SSH username: vagrant
00:02:37.042 default: SSH auth method: private key
00:02:40.329 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:48.451 ==> default: Mounting SSHFS shared folder...
00:02:50.990 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:50.990 ==> default: Checking Mount..
00:02:52.375 ==> default: Folder Successfully Mounted!
00:02:52.375 ==> default: Running provisioner: file...
00:02:53.757 default: ~/.gitconfig => .gitconfig
00:02:54.023
00:02:54.023 SUCCESS!
00:02:54.023
00:02:54.023 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:54.023 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:54.023 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:54.023
00:02:54.087 [Pipeline] }
00:02:54.102 [Pipeline] // stage
00:02:54.110 [Pipeline] dir
00:02:54.111 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:54.113 [Pipeline] {
00:02:54.125 [Pipeline] catchError
00:02:54.127 [Pipeline] {
00:02:54.138 [Pipeline] sh
00:02:54.422 + vagrant ssh-config --host vagrant
00:02:54.422 + sed -ne /^Host/,$p
00:02:54.422 + tee ssh_conf
00:02:57.712 Host vagrant
00:02:57.712 HostName 192.168.121.228
00:02:57.712 User vagrant
00:02:57.712 Port 22
00:02:57.712 UserKnownHostsFile /dev/null
00:02:57.712 StrictHostKeyChecking no
00:02:57.712 PasswordAuthentication no
00:02:57.712 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:57.712 IdentitiesOnly yes
00:02:57.712 LogLevel FATAL
00:02:57.712 ForwardAgent yes
00:02:57.712 ForwardX11 yes
00:02:57.712
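The three-command pipeline above turns vagrant's ssh-config output into a plain ssh_config file: everything before the first Host line is discarded so stock ssh/scp can consume it with -F. As run above, plus an illustrative consumer:

# As run above: keep from the first "Host" line onward, save a copy.
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
# Example consumer (illustrative):
ssh -F ssh_conf vagrant 'hostname'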
00:02:57.727 [Pipeline] withEnv
00:02:57.729 [Pipeline] {
00:02:57.744 [Pipeline] sh
00:02:58.070 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:58.070 source /etc/os-release
00:02:58.070 [[ -e /image.version ]] && img=$(< /image.version)
00:02:58.070 # Minimal, systemd-like check.
00:02:58.070 if [[ -e /.dockerenv ]]; then
00:02:58.070 # Clear garbage from the node's name:
00:02:58.070 # agt-er_autotest_547-896 -> autotest_547-896
00:02:58.070 # $HOSTNAME is the actual container id
00:02:58.071 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:58.071 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:58.071 # We can assume this is a mount from a host where container is running,
00:02:58.071 # so fetch its hostname to easily identify the target swarm worker.
00:02:58.071 container="$(< /etc/hostname) ($agent)"
00:02:58.071 else
00:02:58.071 # Fallback
00:02:58.071 container=$agent
00:02:58.071 fi
00:02:58.071 fi
00:02:58.071 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:58.071
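The script above emits a pipe-delimited identity string for the worker, "OS VER|kernel|image|container". A hedged sketch of how such a string splits apart (the sample kernel value is the one uname reports later in this log; field names are illustrative):

# Sketch (illustrative): split the identity string emitted above.
IFS='|' read -r os kernel image container <<< "Fedora Linux 39|6.8.9-200.fc39.x86_64|N/A|N/A"
echo "worker OS: $os, kernel: $kernel"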
00:02:58.342 [Pipeline] }
00:02:58.358 [Pipeline] // withEnv
00:02:58.368 [Pipeline] setCustomBuildProperty
00:02:58.384 [Pipeline] stage
00:02:58.386 [Pipeline] { (Tests)
00:02:58.403 [Pipeline] sh
00:02:58.687 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:58.959 [Pipeline] sh
00:02:59.238 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:59.513 [Pipeline] timeout
00:02:59.513 Timeout set to expire in 50 min
00:02:59.515 [Pipeline] {
00:02:59.529 [Pipeline] sh
00:02:59.820 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:03:00.390 HEAD is now at 8053cd6b8 test/iscsi_tgt: Remove support for the namespace arg
00:03:00.405 [Pipeline] sh
00:03:00.694 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:03:00.968 [Pipeline] sh
00:03:01.252 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:01.528 [Pipeline] sh
00:03:01.811 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:03:02.073 ++ readlink -f spdk_repo
00:03:02.073 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:02.073 + [[ -n /home/vagrant/spdk_repo ]]
00:03:02.073 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:02.073 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:02.073 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:02.073 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:02.073 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:02.073 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:02.073 + cd /home/vagrant/spdk_repo
00:03:02.073 + source /etc/os-release
00:03:02.073 ++ NAME='Fedora Linux'
00:03:02.073 ++ VERSION='39 (Cloud Edition)'
00:03:02.073 ++ ID=fedora
00:03:02.073 ++ VERSION_ID=39
00:03:02.073 ++ VERSION_CODENAME=
00:03:02.073 ++ PLATFORM_ID=platform:f39
00:03:02.073 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:02.073 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:02.073 ++ LOGO=fedora-logo-icon
00:03:02.073 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:02.073 ++ HOME_URL=https://fedoraproject.org/
00:03:02.073 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:02.073 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:02.073 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:02.073 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:02.073 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:02.073 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:02.073 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:02.073 ++ SUPPORT_END=2024-11-12
00:03:02.073 ++ VARIANT='Cloud Edition'
00:03:02.073 ++ VARIANT_ID=cloud
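Like autorun-spdk.conf, /etc/os-release is sourceable shell, which is what lets autoruner.sh branch on distro facts (the FreeBSD check further down uses $NAME the same way). A small illustrative sketch, not part of the build:

# Sketch: branch on distro facts from /etc/os-release (illustrative).
source /etc/os-release
case "$ID-$VERSION_ID" in
  fedora-39) echo "running on the expected CI image" ;;
  *)         echo "unexpected distro: $PRETTY_NAME" ;;
esac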
00:03:02.073 + uname -a
00:03:02.073 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:02.073 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:02.641 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:02.899 Hugepages
00:03:02.899 node hugesize free / total
00:03:02.899 node0 1048576kB 0 / 0
00:03:02.899 node0 2048kB 0 / 0
00:03:02.899
00:03:02.899 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:02.899 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:02.899 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:02.899 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:02.899 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:03:03.159 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:03:03.159 + rm -f /tmp/spdk-ld-path
00:03:03.159 + source autorun-spdk.conf
00:03:03.159 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:03.159 ++ SPDK_TEST_NVME=1
00:03:03.159 ++ SPDK_TEST_FTL=1
00:03:03.159 ++ SPDK_TEST_ISAL=1
00:03:03.159 ++ SPDK_RUN_ASAN=1
00:03:03.159 ++ SPDK_RUN_UBSAN=1
00:03:03.159 ++ SPDK_TEST_XNVME=1
00:03:03.159 ++ SPDK_TEST_NVME_FDP=1
00:03:03.159 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:03.159 ++ RUN_NIGHTLY=0
00:03:03.159 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:03.159 + [[ -n '' ]]
00:03:03.159 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:03.159 + for M in /var/spdk/build-*-manifest.txt
00:03:03.159 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:03.159 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.159 + for M in /var/spdk/build-*-manifest.txt
00:03:03.159 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:03.159 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.159 + for M in /var/spdk/build-*-manifest.txt
00:03:03.159 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:03.159 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:03.159 ++ uname
00:03:03.159 + [[ Linux == \L\i\n\u\x ]]
00:03:03.159 + sudo dmesg -T
00:03:03.159 + sudo dmesg --clear
00:03:03.159 + dmesg_pid=5249
00:03:03.159 + sudo dmesg -Tw
00:03:03.159 + [[ Fedora Linux == FreeBSD ]]
00:03:03.159 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:03.159 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:03.159 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:03.159 + [[ -x /usr/src/fio-static/fio ]]
00:03:03.159 + export FIO_BIN=/usr/src/fio-static/fio
00:03:03.159 + FIO_BIN=/usr/src/fio-static/fio
00:03:03.159 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:03.159 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:03.159 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:03.159 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:03.159 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:03.159 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:03.159 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:03.159 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:03.159 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:03.418 17:52:32 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:03:03.418 17:52:32 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:03.418 17:52:32 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:03:03.418 17:52:32 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:03.418 17:52:32 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:03.418 17:52:32 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:03:03.418 17:52:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:03.418 17:52:32 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:03.418 17:52:32 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:03.418 17:52:32 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:03.418 17:52:32 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:03.418 17:52:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:03.418 17:52:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:03.418 17:52:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:03.418 17:52:32 -- paths/export.sh@5 -- $ export PATH
00:03:03.418 17:52:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
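Note that each paths/export.sh prepend re-adds directories that are already present, so PATH accumulates duplicates (visible above: /opt/go, /opt/golangci and /opt/protoc each appear three times). Harmless, but a dedup pass is cheap; an illustrative helper, not part of the build:

# Sketch: keep only the first occurrence of each PATH entry.
dedup_path() { printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'; }
PATH=$(dedup_path "$PATH")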
00:03:03.418 17:52:32 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:03.418 17:52:32 -- common/autobuild_common.sh@486 -- $ date +%s
00:03:03.418 17:52:32 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730829152.XXXXXX
00:03:03.418 17:52:32 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730829152.2npbMm
00:03:03.418 17:52:32 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:03:03.418 17:52:32 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:03:03.418 17:52:32 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:03:03.418 17:52:32 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:03.418 17:52:32 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:03.418 17:52:32 -- common/autobuild_common.sh@502 -- $ get_config_params
00:03:03.418 17:52:32 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:03:03.418 17:52:32 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.418 17:52:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:03:03.418 17:52:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:03:03.418 17:52:32 -- pm/common@17 -- $ local monitor
00:03:03.418 17:52:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:03.418 17:52:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:03.418 17:52:32 -- pm/common@25 -- $ sleep 1
00:03:03.418 17:52:32 -- pm/common@21 -- $ date +%s
00:03:03.418 17:52:32 -- pm/common@21 -- $ date +%s
00:03:03.418 17:52:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730829152
00:03:03.418 17:52:32 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730829152
00:03:03.676 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730829152_collect-vmstat.pm.log
00:03:03.676 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730829152_collect-cpu-load.pm.log
00:03:04.613 17:52:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:03:04.613 17:52:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:04.613 17:52:33 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:04.613 17:52:33 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:04.613 17:52:33 -- spdk/autobuild.sh@16 -- $ date -u
00:03:04.613 Tue Nov 5 05:52:33 PM UTC 2024
00:03:04.613 17:52:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:04.613 v25.01-pre-166-g8053cd6b8
00:03:04.613 17:52:33 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:04.613 17:52:33 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:04.613 17:52:33 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:04.613 17:52:33 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:04.613 17:52:33 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.613 ************************************
00:03:04.613 START TEST asan
00:03:04.613 ************************************
00:03:04.613 using asan
00:03:04.613 17:52:33 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:03:04.613
00:03:04.613 real 0m0.000s
00:03:04.613 user 0m0.000s
00:03:04.613 sys 0m0.000s
00:03:04.613 17:52:33 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:03:04.613 17:52:33 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.613 ************************************
00:03:04.613 END TEST asan
00:03:04.613 ************************************
00:03:04.613 17:52:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:04.613 17:52:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:04.613 17:52:33 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:04.613 17:52:33 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:04.613 17:52:33 -- common/autotest_common.sh@10 -- $ set +x
00:03:04.613 ************************************
00:03:04.613 START TEST ubsan
00:03:04.613 ************************************
00:03:04.613 using ubsan
00:03:04.613 17:52:33 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:03:04.613
00:03:04.613 real 0m0.000s
00:03:04.613 user 0m0.000s
00:03:04.613 sys 0m0.000s
00:03:04.613 17:52:33 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:03:04.613 17:52:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:04.613 ************************************
00:03:04.613 END TEST ubsan
00:03:04.613 ************************************
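run_test, used for asan/ubsan above and for make below, brackets a command with banners and timing, which is where the START/END TEST markers and the real/user/sys lines come from. A minimal stand-in (the real helper lives in autotest_common.sh, per the common/autotest_common.sh@... frames above, and does more):

run_test() {  # sketch only, not the real implementation
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
}
run_test asan echo 'using asan'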
00:03:04.613 17:52:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:03:04.613 17:52:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:04.613 17:52:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:04.613 17:52:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:04.613 17:52:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:04.613 17:52:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:04.613 17:52:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:04.613 17:52:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:04.613 17:52:33 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:03:04.872 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:04.872 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:05.440 Using 'verbs' RDMA provider
00:03:21.293 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:39.387 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:39.387 Creating mk/config.mk...done.
00:03:39.387 Creating mk/cc.flags.mk...done.
00:03:39.387 Type 'make' to build.
00:03:39.387 17:53:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:03:39.387 17:53:06 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:03:39.387 17:53:06 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:03:39.387 17:53:06 -- common/autotest_common.sh@10 -- $ set +x
00:03:39.387 ************************************
00:03:39.387 START TEST make
00:03:39.387 ************************************
00:03:39.387 17:53:07 make -- common/autotest_common.sh@1127 -- $ make -j10
00:03:39.387 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:03:39.387 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:03:39.387 meson setup builddir \
00:03:39.387 -Dwith-libaio=enabled \
00:03:39.387 -Dwith-liburing=enabled \
00:03:39.387 -Dwith-libvfn=disabled \
00:03:39.387 -Dwith-spdk=disabled \
00:03:39.387 -Dexamples=false \
00:03:39.387 -Dtests=false \
00:03:39.387 -Dtools=false && \
00:03:39.387 meson compile -C builddir && \
00:03:39.387 cd -)
00:03:39.387 make[1]: Nothing to be done for 'all'.
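The xnvme sub-build above is a self-contained meson project, so it can also be configured and compiled outside make. An illustrative replay of the logged subshell (same options; builddir placement is arbitrary):

# Sketch: reproduce the xnvme sub-build standalone (mirrors the log above).
cd /home/vagrant/spdk_repo/spdk/xnvme
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
meson setup builddir \
  -Dwith-libaio=enabled -Dwith-liburing=enabled \
  -Dwith-libvfn=disabled -Dwith-spdk=disabled \
  -Dexamples=false -Dtests=false -Dtools=false
meson compile -C builddir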
00:03:40.324 The Meson build system
00:03:40.324 Version: 1.5.0
00:03:40.324 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:03:40.324 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:40.324 Build type: native build
00:03:40.324 Project name: xnvme
00:03:40.324 Project version: 0.7.5
00:03:40.324 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:40.324 C linker for the host machine: cc ld.bfd 2.40-14
00:03:40.324 Host machine cpu family: x86_64
00:03:40.324 Host machine cpu: x86_64
00:03:40.324 Message: host_machine.system: linux
00:03:40.324 Compiler for C supports arguments -Wno-missing-braces: YES
00:03:40.324 Compiler for C supports arguments -Wno-cast-function-type: YES
00:03:40.324 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:40.324 Run-time dependency threads found: YES
00:03:40.324 Has header "setupapi.h" : NO
00:03:40.324 Has header "linux/blkzoned.h" : YES
00:03:40.324 Has header "linux/blkzoned.h" : YES (cached)
00:03:40.324 Has header "libaio.h" : YES
00:03:40.324 Library aio found: YES
00:03:40.324 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:40.324 Run-time dependency liburing found: YES 2.2
00:03:40.324 Dependency libvfn skipped: feature with-libvfn disabled
00:03:40.324 Found CMake: /usr/bin/cmake (3.27.7)
00:03:40.324 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:03:40.324 Subproject spdk : skipped: feature with-spdk disabled
00:03:40.324 Run-time dependency appleframeworks found: NO (tried framework)
00:03:40.324 Run-time dependency appleframeworks found: NO (tried framework)
00:03:40.324 Library rt found: YES
00:03:40.324 Checking for function "clock_gettime" with dependency -lrt: YES
00:03:40.324 Configuring xnvme_config.h using configuration
00:03:40.324 Configuring xnvme.spec using configuration
00:03:40.324 Run-time dependency bash-completion found: YES 2.11
00:03:40.324 Message: Bash-completions: /usr/share/bash-completion/completions
00:03:40.324 Program cp found: YES (/usr/bin/cp)
00:03:40.324 Build targets in project: 3
00:03:40.324
00:03:40.324 xnvme 0.7.5
00:03:40.324
00:03:40.324 Subprojects
00:03:40.324 spdk : NO Feature 'with-spdk' disabled
00:03:40.324
00:03:40.324 User defined options
00:03:40.324 examples : false
00:03:40.324 tests : false
00:03:40.324 tools : false
00:03:40.324 with-libaio : enabled
00:03:40.324 with-liburing: enabled
00:03:40.324 with-libvfn : disabled
00:03:40.324 with-spdk : disabled
00:03:40.324
00:03:40.324 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:40.583 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:03:40.583 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:03:40.841 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:03:40.841 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:03:40.841 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:03:40.841 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:03:40.841 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:03:40.841 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:03:40.841 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:03:40.841 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:03:40.841 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:03:40.841 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:03:40.841 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:03:40.841 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:03:40.841 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:03:40.841 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:03:40.841 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:03:40.841 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:03:40.841 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:03:40.841 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:03:40.841 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:03:40.841 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:03:40.841 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:03:41.102 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:03:41.102 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:03:41.102 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:03:41.102 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:03:41.102 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:03:41.102 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:03:41.102 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:03:41.102 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:03:41.102 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:03:41.102 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:03:41.102 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:03:41.102 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:03:41.102 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:03:41.102 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:03:41.102 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:03:41.102 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:03:41.102 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:03:41.102 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:03:41.102 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:03:41.102 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:03:41.102 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:03:41.102 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:03:41.102 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:03:41.102 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:03:41.102 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:03:41.102 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:03:41.102 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:03:41.102 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:03:41.102 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:03:41.102 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:03:41.102 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:03:41.102 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:03:41.102 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:03:41.102 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:03:41.361 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:03:41.361 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:03:41.361 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:03:41.361 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:03:41.361 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:03:41.361 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:03:41.361 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:03:41.361 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:03:41.361 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:03:41.361 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:03:41.361 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:03:41.361 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:03:41.361 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:03:41.361 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:03:41.361 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:03:41.361 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:03:41.361 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:03:41.929 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:03:41.929 [75/76] Linking static target lib/libxnvme.a
00:03:41.929 [76/76] Linking target lib/libxnvme.so.0.7.5
00:03:41.929 INFO: autodetecting backend as ninja
00:03:41.929 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:03:48.499 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:03:48.499 The Meson build system
00:03:48.499 Version: 1.5.0
00:03:48.499 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:48.499 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:48.499 Build type: native build
00:03:48.499 Program cat found: YES (/usr/bin/cat)
00:03:48.499 Project name: DPDK
00:03:48.499 Project version: 24.03.0
00:03:48.499 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:48.499 C linker for the host machine: cc ld.bfd 2.40-14
00:03:48.499 Host machine cpu family: x86_64
00:03:48.499 Host machine cpu: x86_64
00:03:48.499 Message: ## Building in Developer Mode ##
00:03:48.499 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:48.499 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:48.499 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:48.499 Program python3 found: YES (/usr/bin/python3)
00:03:48.499 Program cat found: YES (/usr/bin/cat)
00:03:48.499 Compiler for C supports arguments -march=native: YES
00:03:48.499 Checking for size of "void *" : 8
00:03:48.499 Checking for size of "void *" : 8 (cached)
00:03:48.499 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:48.499 Library m found: YES
00:03:48.499 Library numa found: YES
00:03:48.499 Has header "numaif.h" : YES
00:03:48.499 Library fdt found: NO
00:03:48.499 Library execinfo found: NO
00:03:48.499 Has header "execinfo.h" : YES
00:03:48.499 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:48.499 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:48.499 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:48.499 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:48.499 Run-time dependency openssl found: YES 3.1.1
00:03:48.499 Run-time dependency libpcap found: YES 1.10.4
00:03:48.499 Has header "pcap.h" with dependency libpcap: YES
00:03:48.499 Compiler for C supports arguments -Wcast-qual: YES
00:03:48.499 Compiler for C supports arguments -Wdeprecated: YES
00:03:48.499 Compiler for C supports arguments -Wformat: YES
00:03:48.499 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:48.499 Compiler for C supports arguments -Wformat-security: NO
00:03:48.499 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:48.499 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:48.499 Compiler for C supports arguments -Wnested-externs: YES
00:03:48.499 Compiler for C supports arguments -Wold-style-definition: YES
00:03:48.499 Compiler for C supports arguments -Wpointer-arith: YES
00:03:48.499 Compiler for C supports arguments -Wsign-compare: YES
00:03:48.499 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:48.499 Compiler for C supports arguments -Wundef: YES
00:03:48.499 Compiler for C supports arguments -Wwrite-strings: YES
00:03:48.499 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:48.499 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:48.499 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:48.499 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:48.499 Program objdump found: YES (/usr/bin/objdump)
00:03:48.499 Compiler for C supports arguments -mavx512f: YES
00:03:48.499 Checking if "AVX512 checking" compiles: YES
00:03:48.499 Fetching value of define "__SSE4_2__" : 1
00:03:48.499 Fetching value of define "__AES__" : 1
00:03:48.499 Fetching value of define "__AVX__" : 1
00:03:48.499 Fetching value of define "__AVX2__" : 1
00:03:48.499 Fetching value of define "__AVX512BW__" : 1
00:03:48.499 Fetching value of define "__AVX512CD__" : 1
00:03:48.499 Fetching value of define "__AVX512DQ__" : 1
00:03:48.499 Fetching value of define "__AVX512F__" : 1
00:03:48.499 Fetching value of define "__AVX512VL__" : 1
00:03:48.499 Fetching value of define "__PCLMUL__" : 1
00:03:48.499 Fetching value of define "__RDRND__" : 1
00:03:48.499 Fetching value of define "__RDSEED__" : 1
00:03:48.499 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:03:48.499 Fetching value of define "__znver1__" : (undefined)
00:03:48.499 Fetching value of define "__znver2__" : (undefined)
00:03:48.499 Fetching value of define "__znver3__" : (undefined)
00:03:48.499 Fetching value of define "__znver4__" : (undefined)
00:03:48.499 Library asan found: YES
00:03:48.499 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:48.499 Message: lib/log: Defining dependency "log"
00:03:48.499 Message: lib/kvargs: Defining dependency "kvargs"
00:03:48.499 Message: lib/telemetry: Defining dependency "telemetry"
00:03:48.499 Library rt found: YES
00:03:48.499 Checking for function "getentropy" : NO
00:03:48.499 Message: lib/eal: Defining dependency "eal"
00:03:48.499 Message: lib/ring: Defining dependency "ring"
00:03:48.499 Message: lib/rcu: Defining dependency "rcu"
00:03:48.499 Message: lib/mempool: Defining dependency "mempool"
00:03:48.499 Message: lib/mbuf: Defining dependency "mbuf"
00:03:48.499 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:48.499 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:48.499 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:48.499 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:48.499 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:48.499 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:03:48.499 Compiler for C supports arguments -mpclmul: YES
00:03:48.499 Compiler for C supports arguments -maes: YES
00:03:48.499 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:48.499 Compiler for C supports arguments -mavx512bw: YES
00:03:48.499 Compiler for C supports arguments -mavx512dq: YES
00:03:48.499 Compiler for C supports arguments -mavx512vl: YES
00:03:48.499 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:48.499 Compiler for C supports arguments -mavx2: YES
00:03:48.499 Compiler for C supports arguments -mavx: YES
00:03:48.499 Message: lib/net: Defining dependency "net"
00:03:48.499 Message: lib/meter: Defining dependency "meter"
00:03:48.499 Message: lib/ethdev: Defining dependency "ethdev"
00:03:48.499 Message: lib/pci: Defining dependency "pci"
00:03:48.499 Message: lib/cmdline: Defining dependency "cmdline"
00:03:48.499 Message: lib/hash: Defining dependency "hash"
00:03:48.499 Message: lib/timer: Defining dependency "timer"
00:03:48.499 Message: lib/compressdev: Defining dependency "compressdev"
00:03:48.499 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:48.499 Message: lib/dmadev: Defining dependency "dmadev"
00:03:48.499 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:48.499 Message: lib/power: Defining dependency "power"
00:03:48.499 Message: lib/reorder: Defining dependency "reorder"
00:03:48.499 Message: lib/security: Defining dependency "security"
00:03:48.499 Has header "linux/userfaultfd.h" : YES
00:03:48.499 Has header "linux/vduse.h" : YES
00:03:48.499 Message: lib/vhost: Defining dependency "vhost"
00:03:48.499 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:48.499 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:48.499 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:48.499 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:48.499 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:03:48.499 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:03:48.499 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:03:48.499 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:03:48.499 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:03:48.499 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:03:48.499 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:48.499 Configuring doxy-api-html.conf using configuration
00:03:48.499 Configuring doxy-api-man.conf using configuration
00:03:48.499 Program mandb found: YES (/usr/bin/mandb)
00:03:48.499 Program sphinx-build found: NO
00:03:48.499 Configuring rte_build_config.h using configuration
Message:
00:03:48.499 =================
00:03:48.499 Applications Enabled
00:03:48.499 =================
00:03:48.499
00:03:48.499 apps:
00:03:48.499
00:03:48.499
00:03:48.499 Message:
00:03:48.499 =================
00:03:48.499 Libraries Enabled
00:03:48.499 =================
00:03:48.499
00:03:48.499 libs:
00:03:48.499 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:03:48.499 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:03:48.499 cryptodev, dmadev, power, reorder, security, vhost,
00:03:48.499
00:03:48.499 Message:
00:03:48.499 ===============
00:03:48.499 Drivers Enabled
00:03:48.499 ===============
00:03:48.499
00:03:48.499 common:
00:03:48.499
00:03:48.499 bus:
00:03:48.499 pci, vdev,
00:03:48.499 mempool:
00:03:48.499 ring,
00:03:48.499 dma:
00:03:48.499
00:03:48.499 net:
00:03:48.499
00:03:48.499 crypto:
00:03:48.499
00:03:48.499 compress:
00:03:48.499
00:03:48.499 vdpa:
00:03:48.499
00:03:48.499
00:03:48.499 Message:
00:03:48.499 =================
00:03:48.499 Content Skipped
00:03:48.499 =================
00:03:48.499
00:03:48.499 apps:
00:03:48.499 dumpcap: explicitly disabled via build config
00:03:48.499 graph: explicitly disabled via build config
00:03:48.499 pdump: explicitly disabled via build config
00:03:48.499 proc-info: explicitly disabled via build config
00:03:48.499 test-acl: explicitly disabled via build config
00:03:48.499 test-bbdev: explicitly disabled via build config
00:03:48.499 test-cmdline: explicitly disabled via build config
00:03:48.499 test-compress-perf: explicitly disabled via build config
00:03:48.499 test-crypto-perf: explicitly disabled via build config
00:03:48.499 test-dma-perf: explicitly disabled via build config
00:03:48.499 test-eventdev: explicitly disabled via build config
00:03:48.500 test-fib: explicitly disabled via build config
00:03:48.500 test-flow-perf: explicitly disabled via build config
00:03:48.500 test-gpudev: explicitly disabled via build config
00:03:48.500 test-mldev: explicitly disabled via build config
00:03:48.500 test-pipeline: explicitly disabled via build config
00:03:48.500 test-pmd: explicitly disabled via build config
00:03:48.500 test-regex: explicitly disabled via build config
00:03:48.500 test-sad: explicitly disabled via build config
00:03:48.500 test-security-perf: explicitly disabled via build config
00:03:48.500
00:03:48.500 libs:
00:03:48.500 argparse: explicitly disabled via build config
00:03:48.500 metrics: explicitly disabled via build config
00:03:48.500 acl: explicitly disabled via build config
00:03:48.500 bbdev: explicitly disabled via build config
00:03:48.500 bitratestats: explicitly disabled via build config
00:03:48.500 bpf: explicitly disabled via build config
00:03:48.500 cfgfile: explicitly disabled via build config
00:03:48.500 distributor: explicitly disabled via build config
00:03:48.500 efd: explicitly disabled via build config
00:03:48.500 eventdev: explicitly disabled via build config
00:03:48.500 dispatcher: explicitly disabled via build config
00:03:48.500 gpudev: explicitly disabled via build config
00:03:48.500 gro: explicitly disabled via build config
00:03:48.500 gso: explicitly disabled via build config
00:03:48.500 ip_frag: explicitly disabled via build config
00:03:48.500 jobstats: explicitly disabled via build config
00:03:48.500 latencystats: explicitly disabled via build config
00:03:48.500 lpm: explicitly disabled via build config
00:03:48.500 member: explicitly disabled via build config
00:03:48.500 pcapng: explicitly disabled via build config
00:03:48.500 rawdev: explicitly disabled via build config
00:03:48.500 regexdev: explicitly disabled via build config
00:03:48.500 mldev: explicitly disabled via build config
00:03:48.500 rib: explicitly disabled via build config
00:03:48.500 sched: explicitly disabled via build config
00:03:48.500 stack: explicitly disabled via build config
00:03:48.500 ipsec: explicitly disabled via build config
00:03:48.500 pdcp: explicitly disabled via build config
00:03:48.500 fib: explicitly disabled via build config
00:03:48.500 port: explicitly disabled via build config
00:03:48.500 pdump: explicitly disabled via build config
00:03:48.500 table: explicitly disabled via build config
00:03:48.500 pipeline: explicitly disabled via build config
00:03:48.500 graph: explicitly disabled via build config
00:03:48.500 node: explicitly disabled via build config
00:03:48.500
00:03:48.500 drivers:
00:03:48.500 common/cpt: not in enabled drivers build config
00:03:48.500 common/dpaax: not in enabled drivers build config
00:03:48.500 common/iavf: not in enabled drivers build config
00:03:48.500 common/idpf: not in enabled drivers build config
00:03:48.500 common/ionic: not in enabled drivers build config
00:03:48.500 common/mvep: not in enabled drivers build config
00:03:48.500 common/octeontx: not in enabled drivers build config
00:03:48.500 bus/auxiliary: not in enabled drivers build config
00:03:48.500 bus/cdx: not in enabled drivers build config
00:03:48.500 bus/dpaa: not in enabled drivers build config
00:03:48.500 bus/fslmc: not in enabled drivers build config
00:03:48.500 bus/ifpga: not in enabled drivers build config
00:03:48.500 bus/platform: not in enabled drivers build config
00:03:48.500 bus/uacce: not in enabled drivers build config
00:03:48.500 bus/vmbus: not in enabled drivers build config
00:03:48.500 common/cnxk: not in enabled drivers build config
00:03:48.500 common/mlx5: not in enabled drivers build config
00:03:48.500 common/nfp: not in enabled drivers build config
00:03:48.500 common/nitrox: not in enabled drivers build config
00:03:48.500 common/qat: not in enabled drivers build config
00:03:48.500 common/sfc_efx: not in enabled drivers build config
00:03:48.500 mempool/bucket: not in enabled drivers build config
00:03:48.500 mempool/cnxk: not in enabled drivers build config
00:03:48.500 mempool/dpaa: not in enabled drivers build config
00:03:48.500 mempool/dpaa2: not in enabled drivers build config
00:03:48.500 mempool/octeontx: not in enabled drivers build config
00:03:48.500 mempool/stack: not in enabled drivers build config
00:03:48.500 dma/cnxk: not in enabled drivers build config
00:03:48.500 dma/dpaa: not in enabled drivers build config
00:03:48.500 dma/dpaa2: not in enabled drivers build config
00:03:48.500 dma/hisilicon: not in enabled drivers build config
00:03:48.500 dma/idxd: not in enabled drivers build config
00:03:48.500 dma/ioat: not in enabled drivers build config
00:03:48.500 dma/skeleton: not in enabled drivers build config
00:03:48.500 net/af_packet: not in enabled drivers build config
00:03:48.500 net/af_xdp: not in enabled drivers build config
00:03:48.500 net/ark: not in enabled drivers build config
00:03:48.500 net/atlantic: not in enabled drivers build config
00:03:48.500 net/avp: not in enabled drivers build config
00:03:48.500 net/axgbe: not in enabled drivers build config
00:03:48.500 net/bnx2x: not in enabled drivers build config
00:03:48.500 net/bnxt: not in enabled drivers build config
00:03:48.500 net/bonding: not in enabled drivers build config
net/cnxk: not in enabled drivers build config 00:03:48.500 net/cpfl: not in enabled drivers build config 00:03:48.500 net/cxgbe: not in enabled drivers build config 00:03:48.500 net/dpaa: not in enabled drivers build config 00:03:48.500 net/dpaa2: not in enabled drivers build config 00:03:48.500 net/e1000: not in enabled drivers build config 00:03:48.500 net/ena: not in enabled drivers build config 00:03:48.500 net/enetc: not in enabled drivers build config 00:03:48.500 net/enetfec: not in enabled drivers build config 00:03:48.500 net/enic: not in enabled drivers build config 00:03:48.500 net/failsafe: not in enabled drivers build config 00:03:48.500 net/fm10k: not in enabled drivers build config 00:03:48.500 net/gve: not in enabled drivers build config 00:03:48.500 net/hinic: not in enabled drivers build config 00:03:48.500 net/hns3: not in enabled drivers build config 00:03:48.500 net/i40e: not in enabled drivers build config 00:03:48.500 net/iavf: not in enabled drivers build config 00:03:48.500 net/ice: not in enabled drivers build config 00:03:48.500 net/idpf: not in enabled drivers build config 00:03:48.500 net/igc: not in enabled drivers build config 00:03:48.500 net/ionic: not in enabled drivers build config 00:03:48.500 net/ipn3ke: not in enabled drivers build config 00:03:48.500 net/ixgbe: not in enabled drivers build config 00:03:48.500 net/mana: not in enabled drivers build config 00:03:48.500 net/memif: not in enabled drivers build config 00:03:48.500 net/mlx4: not in enabled drivers build config 00:03:48.500 net/mlx5: not in enabled drivers build config 00:03:48.500 net/mvneta: not in enabled drivers build config 00:03:48.500 net/mvpp2: not in enabled drivers build config 00:03:48.500 net/netvsc: not in enabled drivers build config 00:03:48.500 net/nfb: not in enabled drivers build config 00:03:48.500 net/nfp: not in enabled drivers build config 00:03:48.500 net/ngbe: not in enabled drivers build config 00:03:48.500 net/null: not in enabled drivers build config 00:03:48.500 net/octeontx: not in enabled drivers build config 00:03:48.500 net/octeon_ep: not in enabled drivers build config 00:03:48.500 net/pcap: not in enabled drivers build config 00:03:48.500 net/pfe: not in enabled drivers build config 00:03:48.500 net/qede: not in enabled drivers build config 00:03:48.500 net/ring: not in enabled drivers build config 00:03:48.500 net/sfc: not in enabled drivers build config 00:03:48.500 net/softnic: not in enabled drivers build config 00:03:48.500 net/tap: not in enabled drivers build config 00:03:48.500 net/thunderx: not in enabled drivers build config 00:03:48.500 net/txgbe: not in enabled drivers build config 00:03:48.500 net/vdev_netvsc: not in enabled drivers build config 00:03:48.500 net/vhost: not in enabled drivers build config 00:03:48.500 net/virtio: not in enabled drivers build config 00:03:48.500 net/vmxnet3: not in enabled drivers build config 00:03:48.500 raw/*: missing internal dependency, "rawdev" 00:03:48.500 crypto/armv8: not in enabled drivers build config 00:03:48.500 crypto/bcmfs: not in enabled drivers build config 00:03:48.500 crypto/caam_jr: not in enabled drivers build config 00:03:48.500 crypto/ccp: not in enabled drivers build config 00:03:48.500 crypto/cnxk: not in enabled drivers build config 00:03:48.500 crypto/dpaa_sec: not in enabled drivers build config 00:03:48.500 crypto/dpaa2_sec: not in enabled drivers build config 00:03:48.500 crypto/ipsec_mb: not in enabled drivers build config 00:03:48.500 crypto/mlx5: not in enabled drivers build config 
00:03:48.500 crypto/mvsam: not in enabled drivers build config 00:03:48.500 crypto/nitrox: not in enabled drivers build config 00:03:48.500 crypto/null: not in enabled drivers build config 00:03:48.500 crypto/octeontx: not in enabled drivers build config 00:03:48.500 crypto/openssl: not in enabled drivers build config 00:03:48.500 crypto/scheduler: not in enabled drivers build config 00:03:48.500 crypto/uadk: not in enabled drivers build config 00:03:48.500 crypto/virtio: not in enabled drivers build config 00:03:48.500 compress/isal: not in enabled drivers build config 00:03:48.500 compress/mlx5: not in enabled drivers build config 00:03:48.500 compress/nitrox: not in enabled drivers build config 00:03:48.500 compress/octeontx: not in enabled drivers build config 00:03:48.500 compress/zlib: not in enabled drivers build config 00:03:48.500 regex/*: missing internal dependency, "regexdev" 00:03:48.500 ml/*: missing internal dependency, "mldev" 00:03:48.500 vdpa/ifc: not in enabled drivers build config 00:03:48.500 vdpa/mlx5: not in enabled drivers build config 00:03:48.500 vdpa/nfp: not in enabled drivers build config 00:03:48.500 vdpa/sfc: not in enabled drivers build config 00:03:48.500 event/*: missing internal dependency, "eventdev" 00:03:48.500 baseband/*: missing internal dependency, "bbdev" 00:03:48.500 gpu/*: missing internal dependency, "gpudev" 00:03:48.500 00:03:48.500 00:03:48.759 Build targets in project: 85 00:03:48.759 00:03:48.759 DPDK 24.03.0 00:03:48.759 00:03:48.759 User defined options 00:03:48.759 buildtype : debug 00:03:48.759 default_library : shared 00:03:48.759 libdir : lib 00:03:48.759 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:48.759 b_sanitize : address 00:03:48.759 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:48.759 c_link_args : 00:03:48.759 cpu_instruction_set: native 00:03:48.759 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:48.759 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:48.759 enable_docs : false 00:03:48.759 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:48.759 enable_kmods : false 00:03:48.759 max_lcores : 128 00:03:48.759 tests : false 00:03:48.759 00:03:48.759 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:49.326 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:49.326 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:49.326 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:49.326 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:49.326 [4/268] Linking static target lib/librte_kvargs.a 00:03:49.326 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:49.326 [6/268] Linking static target lib/librte_log.a 00:03:49.584 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:49.584 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:49.842 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 
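Note on the configuration block above: the "User defined options" list is a complete record of how this DPDK 24.03.0 tree was configured, but the meson command itself is not echoed in the log (SPDK's dpdkbuild wrapper issues it). As a rough reconstruction only, with the long disable_apps/disable_libs lists abbreviated to their first entries (the ellipses stand in for the full lists printed above), the recorded options correspond to a setup along these lines:

# Hand-reconstructed sketch, not the literal command from this run:
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
    --buildtype=debug --default-library=shared --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps='dumpcap,graph,pdump,...' \
    -Ddisable_libs='acl,argparse,bbdev,...' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10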
00:03:49.842 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:49.842 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:49.842 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:49.842 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:49.842 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:49.842 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:49.842 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:49.842 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:49.842 [18/268] Linking static target lib/librte_telemetry.a 00:03:50.408 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.408 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:50.408 [21/268] Linking target lib/librte_log.so.24.1 00:03:50.408 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:50.408 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:50.408 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:50.408 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:50.408 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:50.408 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:50.666 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:50.666 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:50.666 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:50.666 [31/268] Linking target lib/librte_kvargs.so.24.1 00:03:50.666 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:50.666 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.923 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:50.923 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:50.923 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:50.923 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:50.923 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:50.923 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:50.923 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:51.182 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:51.182 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:51.182 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:51.182 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:51.182 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:51.182 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:51.182 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:51.440 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:51.440 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:51.698 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:51.698 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:51.698 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:51.698 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:51.698 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:51.698 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:51.956 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:51.956 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:51.956 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:51.956 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:52.213 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:52.213 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:52.213 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:52.213 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:52.213 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:52.213 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:52.213 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:52.213 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:52.472 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:52.472 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:52.730 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:52.731 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:52.731 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:52.731 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:52.731 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:52.731 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:52.731 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:52.989 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:52.989 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:52.989 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:52.989 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:52.989 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:53.247 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:53.247 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:53.247 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:53.247 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:53.247 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:53.505 [87/268] Linking static target lib/librte_eal.a 00:03:53.505 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:53.505 [89/268] Linking 
static target lib/librte_rcu.a 00:03:53.505 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:53.505 [91/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:53.505 [92/268] Linking static target lib/librte_ring.a 00:03:53.505 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:53.762 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:53.762 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:53.762 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:53.762 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:53.762 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:53.762 [99/268] Linking static target lib/librte_mempool.a 00:03:54.019 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.019 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.019 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:54.019 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:54.019 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:54.277 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:54.277 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:54.277 [107/268] Linking static target lib/librte_net.a 00:03:54.277 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:54.277 [109/268] Linking static target lib/librte_meter.a 00:03:54.536 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:54.536 [111/268] Linking static target lib/librte_mbuf.a 00:03:54.536 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:54.536 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:54.536 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:54.536 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:54.794 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.794 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.053 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:55.053 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:55.053 [120/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.311 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:55.311 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:55.570 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:55.570 [124/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.570 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:55.570 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:55.570 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:55.570 [128/268] Linking static target lib/librte_pci.a 00:03:55.829 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:55.829 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
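Note on the "Generating symbol file ..." entries interleaved with the link steps above: this is meson's relink-avoidance bookkeeping rather than DPDK code. After each shared library links, meson dumps its exported dynamic symbols to a .symbols file; if that file is unchanged on a rebuild, targets depending on the library need not be relinked. A conceptual approximation only (meson's actual symbol extractor differs in detail):

# Roughly what one "Generating symbol file" step records:
nm --dynamic --defined-only lib/librte_log.so.24.1 \
    | awk '{ print $2, $3 }' \
    > lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
# If the output matches the previous build byte for byte, ninja can skip
# relinking everything that links against librte_log.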
00:03:55.829 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:55.829 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:55.829 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:56.088 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:56.088 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:56.088 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:56.088 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:56.088 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.088 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:56.088 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:56.088 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:56.088 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:56.088 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:56.088 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:56.088 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:56.346 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:56.346 [147/268] Linking static target lib/librte_cmdline.a 00:03:56.346 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:56.605 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:56.605 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:56.605 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:56.605 [152/268] Linking static target lib/librte_timer.a 00:03:56.864 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:56.864 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:56.864 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:56.864 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:57.123 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:57.123 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:57.123 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.123 [160/268] Linking static target lib/librte_compressdev.a 00:03:57.382 [161/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:57.382 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:57.382 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:57.382 [164/268] Linking static target lib/librte_hash.a 00:03:57.382 [165/268] Linking static target lib/librte_ethdev.a 00:03:57.382 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:57.640 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:57.640 [168/268] Linking static target lib/librte_dmadev.a 00:03:57.640 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:57.640 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 
00:03:57.640 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:57.640 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:57.899 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.899 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:58.158 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:58.158 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.158 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:58.417 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:58.417 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:58.417 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.417 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:58.417 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:58.417 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.417 [184/268] Linking static target lib/librte_cryptodev.a 00:03:58.676 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:58.676 [186/268] Linking static target lib/librte_power.a 00:03:58.936 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:58.936 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:58.936 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:58.936 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:58.936 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:58.936 [192/268] Linking static target lib/librte_reorder.a 00:03:58.936 [193/268] Linking static target lib/librte_security.a 00:03:59.505 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:59.505 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.764 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.024 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:00.024 [198/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:00.024 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:00.024 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:00.283 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:00.283 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:00.283 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:00.542 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:00.542 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:00.542 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:00.801 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:00.801 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:00.801 [209/268] Linking static target 
drivers/libtmp_rte_bus_vdev.a 00:04:00.801 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:00.801 [211/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:01.060 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:01.060 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:01.060 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:01.060 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:01.060 [216/268] Linking static target drivers/librte_bus_vdev.a 00:04:01.060 [217/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.060 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:01.060 [219/268] Linking static target drivers/librte_bus_pci.a 00:04:01.060 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:01.060 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:01.319 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.319 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:01.319 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:01.319 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:01.319 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:01.579 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.148 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:06.344 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:06.344 [230/268] Linking static target lib/librte_vhost.a 00:04:06.344 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.344 [232/268] Linking target lib/librte_eal.so.24.1 00:04:06.344 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:06.344 [234/268] Linking target lib/librte_ring.so.24.1 00:04:06.344 [235/268] Linking target lib/librte_meter.so.24.1 00:04:06.344 [236/268] Linking target lib/librte_timer.so.24.1 00:04:06.344 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:06.344 [238/268] Linking target lib/librte_dmadev.so.24.1 00:04:06.344 [239/268] Linking target lib/librte_pci.so.24.1 00:04:06.344 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:06.344 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:06.344 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:06.344 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:06.344 [244/268] Linking target lib/librte_mempool.so.24.1 00:04:06.344 [245/268] Linking target lib/librte_rcu.so.24.1 00:04:06.344 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:06.344 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:06.603 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:06.603 [249/268] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:06.603 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:06.603 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:06.603 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:06.603 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:06.862 [254/268] Linking target lib/librte_reorder.so.24.1 00:04:06.862 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:06.862 [256/268] Linking target lib/librte_compressdev.so.24.1 00:04:06.862 [257/268] Linking target lib/librte_net.so.24.1 00:04:06.862 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:06.862 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:06.862 [260/268] Linking target lib/librte_hash.so.24.1 00:04:06.862 [261/268] Linking target lib/librte_cmdline.so.24.1 00:04:06.862 [262/268] Linking target lib/librte_security.so.24.1 00:04:06.862 [263/268] Linking target lib/librte_ethdev.so.24.1 00:04:07.122 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:07.122 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:07.122 [266/268] Linking target lib/librte_power.so.24.1 00:04:07.689 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.948 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:07.948 INFO: autodetecting backend as ninja 00:04:07.948 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:26.072 CC lib/ut/ut.o 00:04:26.072 CC lib/ut_mock/mock.o 00:04:26.072 CC lib/log/log.o 00:04:26.072 CC lib/log/log_flags.o 00:04:26.072 CC lib/log/log_deprecated.o 00:04:26.072 LIB libspdk_ut_mock.a 00:04:26.072 LIB libspdk_ut.a 00:04:26.072 LIB libspdk_log.a 00:04:26.072 SO libspdk_ut.so.2.0 00:04:26.072 SO libspdk_ut_mock.so.6.0 00:04:26.072 SO libspdk_log.so.7.1 00:04:26.072 SYMLINK libspdk_ut_mock.so 00:04:26.072 SYMLINK libspdk_ut.so 00:04:26.072 SYMLINK libspdk_log.so 00:04:26.072 CC lib/ioat/ioat.o 00:04:26.072 CC lib/util/base64.o 00:04:26.072 CC lib/util/bit_array.o 00:04:26.072 CC lib/util/crc16.o 00:04:26.072 CC lib/util/cpuset.o 00:04:26.072 CC lib/util/crc32.o 00:04:26.072 CC lib/util/crc32c.o 00:04:26.072 CXX lib/trace_parser/trace.o 00:04:26.072 CC lib/dma/dma.o 00:04:26.072 CC lib/vfio_user/host/vfio_user_pci.o 00:04:26.072 CC lib/util/crc32_ieee.o 00:04:26.072 CC lib/util/crc64.o 00:04:26.072 CC lib/util/dif.o 00:04:26.072 CC lib/util/fd.o 00:04:26.072 LIB libspdk_dma.a 00:04:26.072 CC lib/vfio_user/host/vfio_user.o 00:04:26.072 CC lib/util/fd_group.o 00:04:26.072 SO libspdk_dma.so.5.0 00:04:26.072 CC lib/util/file.o 00:04:26.072 CC lib/util/hexlify.o 00:04:26.072 LIB libspdk_ioat.a 00:04:26.072 SYMLINK libspdk_dma.so 00:04:26.072 CC lib/util/iov.o 00:04:26.072 CC lib/util/math.o 00:04:26.072 SO libspdk_ioat.so.7.0 00:04:26.072 CC lib/util/net.o 00:04:26.072 SYMLINK libspdk_ioat.so 00:04:26.072 CC lib/util/pipe.o 00:04:26.072 CC lib/util/strerror_tls.o 00:04:26.072 LIB libspdk_vfio_user.a 00:04:26.072 CC lib/util/string.o 00:04:26.072 SO libspdk_vfio_user.so.5.0 00:04:26.072 CC lib/util/uuid.o 00:04:26.072 CC lib/util/xor.o 00:04:26.072 CC lib/util/zipf.o 00:04:26.072 SYMLINK libspdk_vfio_user.so 00:04:26.072 CC lib/util/md5.o 00:04:26.072 LIB 
libspdk_util.a 00:04:26.072 SO libspdk_util.so.10.1 00:04:26.072 LIB libspdk_trace_parser.a 00:04:26.072 SO libspdk_trace_parser.so.6.0 00:04:26.072 SYMLINK libspdk_util.so 00:04:26.072 SYMLINK libspdk_trace_parser.so 00:04:26.072 CC lib/conf/conf.o 00:04:26.072 CC lib/idxd/idxd.o 00:04:26.072 CC lib/idxd/idxd_user.o 00:04:26.072 CC lib/rdma_provider/common.o 00:04:26.072 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:26.072 CC lib/idxd/idxd_kernel.o 00:04:26.072 CC lib/vmd/vmd.o 00:04:26.072 CC lib/env_dpdk/env.o 00:04:26.072 CC lib/rdma_utils/rdma_utils.o 00:04:26.072 CC lib/json/json_parse.o 00:04:26.072 CC lib/json/json_util.o 00:04:26.073 CC lib/json/json_write.o 00:04:26.073 LIB libspdk_rdma_provider.a 00:04:26.073 SO libspdk_rdma_provider.so.6.0 00:04:26.073 LIB libspdk_conf.a 00:04:26.073 CC lib/vmd/led.o 00:04:26.073 SO libspdk_conf.so.6.0 00:04:26.073 LIB libspdk_rdma_utils.a 00:04:26.073 SYMLINK libspdk_rdma_provider.so 00:04:26.073 CC lib/env_dpdk/memory.o 00:04:26.073 CC lib/env_dpdk/pci.o 00:04:26.073 SO libspdk_rdma_utils.so.1.0 00:04:26.073 SYMLINK libspdk_conf.so 00:04:26.073 CC lib/env_dpdk/init.o 00:04:26.073 SYMLINK libspdk_rdma_utils.so 00:04:26.073 CC lib/env_dpdk/threads.o 00:04:26.073 CC lib/env_dpdk/pci_ioat.o 00:04:26.073 CC lib/env_dpdk/pci_virtio.o 00:04:26.073 LIB libspdk_json.a 00:04:26.073 CC lib/env_dpdk/pci_vmd.o 00:04:26.073 SO libspdk_json.so.6.0 00:04:26.073 CC lib/env_dpdk/pci_idxd.o 00:04:26.073 CC lib/env_dpdk/pci_event.o 00:04:26.073 SYMLINK libspdk_json.so 00:04:26.073 CC lib/env_dpdk/sigbus_handler.o 00:04:26.073 CC lib/env_dpdk/pci_dpdk.o 00:04:26.073 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:26.073 LIB libspdk_idxd.a 00:04:26.073 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:26.073 SO libspdk_idxd.so.12.1 00:04:26.073 LIB libspdk_vmd.a 00:04:26.073 SYMLINK libspdk_idxd.so 00:04:26.073 SO libspdk_vmd.so.6.0 00:04:26.073 SYMLINK libspdk_vmd.so 00:04:26.073 CC lib/jsonrpc/jsonrpc_server.o 00:04:26.073 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:26.073 CC lib/jsonrpc/jsonrpc_client.o 00:04:26.073 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:26.073 LIB libspdk_jsonrpc.a 00:04:26.073 SO libspdk_jsonrpc.so.6.0 00:04:26.073 SYMLINK libspdk_jsonrpc.so 00:04:26.332 LIB libspdk_env_dpdk.a 00:04:26.590 SO libspdk_env_dpdk.so.15.1 00:04:26.590 CC lib/rpc/rpc.o 00:04:26.590 SYMLINK libspdk_env_dpdk.so 00:04:26.849 LIB libspdk_rpc.a 00:04:26.849 SO libspdk_rpc.so.6.0 00:04:26.849 SYMLINK libspdk_rpc.so 00:04:27.416 CC lib/notify/notify_rpc.o 00:04:27.416 CC lib/notify/notify.o 00:04:27.416 CC lib/trace/trace_flags.o 00:04:27.416 CC lib/trace/trace.o 00:04:27.416 CC lib/trace/trace_rpc.o 00:04:27.416 CC lib/keyring/keyring.o 00:04:27.416 CC lib/keyring/keyring_rpc.o 00:04:27.416 LIB libspdk_notify.a 00:04:27.416 SO libspdk_notify.so.6.0 00:04:27.416 LIB libspdk_trace.a 00:04:27.416 LIB libspdk_keyring.a 00:04:27.416 SYMLINK libspdk_notify.so 00:04:27.674 SO libspdk_keyring.so.2.0 00:04:27.674 SO libspdk_trace.so.11.0 00:04:27.674 SYMLINK libspdk_keyring.so 00:04:27.674 SYMLINK libspdk_trace.so 00:04:27.933 CC lib/sock/sock.o 00:04:27.933 CC lib/sock/sock_rpc.o 00:04:27.933 CC lib/thread/thread.o 00:04:27.933 CC lib/thread/iobuf.o 00:04:28.606 LIB libspdk_sock.a 00:04:28.606 SO libspdk_sock.so.10.0 00:04:28.606 SYMLINK libspdk_sock.so 00:04:29.173 CC lib/nvme/nvme_ctrlr.o 00:04:29.173 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:29.173 CC lib/nvme/nvme_fabric.o 00:04:29.173 CC lib/nvme/nvme_ns_cmd.o 00:04:29.173 CC lib/nvme/nvme_ns.o 00:04:29.173 CC lib/nvme/nvme_pcie_common.o 
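Note on the SPDK make entries in this stretch of the log: CC compiles one object, LIB archives a static library, SO links the versioned shared object, and SYMLINK points the unversioned name at it. Taking libspdk_log as the example (its objects appear in the CC lines above), an SO/SYMLINK pair amounts to the usual shared-library convention; this is an illustration, not SPDK's actual Makefile rule:

# Plain-shell equivalent of one SO + SYMLINK pair from the log above:
cc -shared -o libspdk_log.so.7.1 -Wl,-soname,libspdk_log.so.7.1 \
    log.o log_flags.o log_deprecated.o
ln -sf libspdk_log.so.7.1 libspdk_log.so   # unversioned dev link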
00:04:29.173 CC lib/nvme/nvme_pcie.o 00:04:29.173 CC lib/nvme/nvme.o 00:04:29.173 CC lib/nvme/nvme_qpair.o 00:04:29.778 CC lib/nvme/nvme_quirks.o 00:04:29.778 LIB libspdk_thread.a 00:04:29.778 CC lib/nvme/nvme_transport.o 00:04:29.778 SO libspdk_thread.so.11.0 00:04:29.778 CC lib/nvme/nvme_discovery.o 00:04:29.778 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:29.778 SYMLINK libspdk_thread.so 00:04:29.778 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:29.778 CC lib/nvme/nvme_tcp.o 00:04:29.778 CC lib/nvme/nvme_opal.o 00:04:29.778 CC lib/nvme/nvme_io_msg.o 00:04:30.036 CC lib/nvme/nvme_poll_group.o 00:04:30.036 CC lib/nvme/nvme_zns.o 00:04:30.295 CC lib/nvme/nvme_stubs.o 00:04:30.295 CC lib/nvme/nvme_auth.o 00:04:30.295 CC lib/nvme/nvme_cuse.o 00:04:30.295 CC lib/nvme/nvme_rdma.o 00:04:30.552 CC lib/accel/accel.o 00:04:30.552 CC lib/blob/blobstore.o 00:04:30.552 CC lib/blob/request.o 00:04:30.552 CC lib/blob/zeroes.o 00:04:30.552 CC lib/blob/blob_bs_dev.o 00:04:31.120 CC lib/init/json_config.o 00:04:31.120 CC lib/virtio/virtio.o 00:04:31.120 CC lib/fsdev/fsdev.o 00:04:31.120 CC lib/init/subsystem.o 00:04:31.120 CC lib/init/subsystem_rpc.o 00:04:31.120 CC lib/init/rpc.o 00:04:31.378 CC lib/accel/accel_rpc.o 00:04:31.378 CC lib/accel/accel_sw.o 00:04:31.378 CC lib/fsdev/fsdev_io.o 00:04:31.378 CC lib/virtio/virtio_vhost_user.o 00:04:31.378 LIB libspdk_init.a 00:04:31.378 SO libspdk_init.so.6.0 00:04:31.378 CC lib/fsdev/fsdev_rpc.o 00:04:31.638 SYMLINK libspdk_init.so 00:04:31.638 CC lib/virtio/virtio_vfio_user.o 00:04:31.638 CC lib/virtio/virtio_pci.o 00:04:31.638 CC lib/event/app.o 00:04:31.638 CC lib/event/reactor.o 00:04:31.638 LIB libspdk_accel.a 00:04:31.638 CC lib/event/log_rpc.o 00:04:31.638 LIB libspdk_nvme.a 00:04:31.638 CC lib/event/app_rpc.o 00:04:31.638 SO libspdk_accel.so.16.0 00:04:31.638 LIB libspdk_fsdev.a 00:04:31.638 CC lib/event/scheduler_static.o 00:04:31.896 SO libspdk_fsdev.so.2.0 00:04:31.896 SYMLINK libspdk_accel.so 00:04:31.896 SYMLINK libspdk_fsdev.so 00:04:31.896 SO libspdk_nvme.so.15.0 00:04:31.896 LIB libspdk_virtio.a 00:04:31.896 CC lib/bdev/bdev.o 00:04:31.896 CC lib/bdev/bdev_rpc.o 00:04:31.896 SO libspdk_virtio.so.7.0 00:04:31.896 CC lib/bdev/bdev_zone.o 00:04:31.896 CC lib/bdev/part.o 00:04:32.154 SYMLINK libspdk_virtio.so 00:04:32.154 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:32.154 CC lib/bdev/scsi_nvme.o 00:04:32.154 LIB libspdk_event.a 00:04:32.154 SYMLINK libspdk_nvme.so 00:04:32.154 SO libspdk_event.so.14.0 00:04:32.413 SYMLINK libspdk_event.so 00:04:32.981 LIB libspdk_fuse_dispatcher.a 00:04:32.981 SO libspdk_fuse_dispatcher.so.1.0 00:04:32.981 SYMLINK libspdk_fuse_dispatcher.so 00:04:33.918 LIB libspdk_blob.a 00:04:34.191 SO libspdk_blob.so.11.0 00:04:34.191 SYMLINK libspdk_blob.so 00:04:34.776 CC lib/lvol/lvol.o 00:04:34.776 CC lib/blobfs/tree.o 00:04:34.776 CC lib/blobfs/blobfs.o 00:04:35.035 LIB libspdk_bdev.a 00:04:35.035 SO libspdk_bdev.so.17.0 00:04:35.035 SYMLINK libspdk_bdev.so 00:04:35.293 CC lib/ublk/ublk_rpc.o 00:04:35.293 CC lib/ublk/ublk.o 00:04:35.293 CC lib/nbd/nbd_rpc.o 00:04:35.293 CC lib/nvmf/ctrlr.o 00:04:35.293 CC lib/nbd/nbd.o 00:04:35.293 CC lib/nvmf/ctrlr_discovery.o 00:04:35.552 CC lib/scsi/dev.o 00:04:35.552 CC lib/ftl/ftl_core.o 00:04:35.552 LIB libspdk_blobfs.a 00:04:35.552 CC lib/scsi/lun.o 00:04:35.552 SO libspdk_blobfs.so.10.0 00:04:35.552 LIB libspdk_lvol.a 00:04:35.552 CC lib/ftl/ftl_init.o 00:04:35.552 SO libspdk_lvol.so.10.0 00:04:35.552 SYMLINK libspdk_blobfs.so 00:04:35.552 CC lib/nvmf/ctrlr_bdev.o 00:04:35.812 CC 
lib/nvmf/subsystem.o 00:04:35.812 SYMLINK libspdk_lvol.so 00:04:35.812 CC lib/nvmf/nvmf.o 00:04:35.812 CC lib/ftl/ftl_layout.o 00:04:35.812 LIB libspdk_nbd.a 00:04:35.812 SO libspdk_nbd.so.7.0 00:04:35.812 CC lib/ftl/ftl_debug.o 00:04:35.812 CC lib/scsi/port.o 00:04:35.812 CC lib/scsi/scsi.o 00:04:36.071 SYMLINK libspdk_nbd.so 00:04:36.071 CC lib/scsi/scsi_bdev.o 00:04:36.071 CC lib/scsi/scsi_pr.o 00:04:36.071 LIB libspdk_ublk.a 00:04:36.071 CC lib/nvmf/nvmf_rpc.o 00:04:36.071 SO libspdk_ublk.so.3.0 00:04:36.071 CC lib/scsi/scsi_rpc.o 00:04:36.071 CC lib/ftl/ftl_io.o 00:04:36.071 SYMLINK libspdk_ublk.so 00:04:36.071 CC lib/ftl/ftl_sb.o 00:04:36.330 CC lib/scsi/task.o 00:04:36.330 CC lib/nvmf/transport.o 00:04:36.330 CC lib/ftl/ftl_l2p.o 00:04:36.330 CC lib/ftl/ftl_l2p_flat.o 00:04:36.330 CC lib/nvmf/tcp.o 00:04:36.589 CC lib/nvmf/stubs.o 00:04:36.589 LIB libspdk_scsi.a 00:04:36.589 CC lib/ftl/ftl_nv_cache.o 00:04:36.589 CC lib/ftl/ftl_band.o 00:04:36.589 SO libspdk_scsi.so.9.0 00:04:36.589 CC lib/ftl/ftl_band_ops.o 00:04:36.589 SYMLINK libspdk_scsi.so 00:04:36.589 CC lib/ftl/ftl_writer.o 00:04:36.848 CC lib/ftl/ftl_rq.o 00:04:36.848 CC lib/nvmf/mdns_server.o 00:04:36.848 CC lib/ftl/ftl_reloc.o 00:04:36.848 CC lib/ftl/ftl_l2p_cache.o 00:04:37.106 CC lib/nvmf/rdma.o 00:04:37.106 CC lib/ftl/ftl_p2l.o 00:04:37.106 CC lib/ftl/ftl_p2l_log.o 00:04:37.364 CC lib/iscsi/conn.o 00:04:37.364 CC lib/iscsi/init_grp.o 00:04:37.364 CC lib/iscsi/iscsi.o 00:04:37.364 CC lib/iscsi/param.o 00:04:37.364 CC lib/iscsi/portal_grp.o 00:04:37.364 CC lib/iscsi/tgt_node.o 00:04:37.622 CC lib/ftl/mngt/ftl_mngt.o 00:04:37.622 CC lib/iscsi/iscsi_subsystem.o 00:04:37.622 CC lib/vhost/vhost.o 00:04:37.622 CC lib/iscsi/iscsi_rpc.o 00:04:37.622 CC lib/iscsi/task.o 00:04:37.880 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:37.880 CC lib/nvmf/auth.o 00:04:37.880 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:37.880 CC lib/vhost/vhost_rpc.o 00:04:38.137 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:38.137 CC lib/vhost/vhost_scsi.o 00:04:38.137 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:38.137 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:38.137 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:38.137 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:38.395 CC lib/vhost/vhost_blk.o 00:04:38.395 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:38.395 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:38.395 CC lib/vhost/rte_vhost_user.o 00:04:38.395 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:38.653 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:38.653 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:38.653 CC lib/ftl/utils/ftl_conf.o 00:04:38.653 CC lib/ftl/utils/ftl_md.o 00:04:38.653 CC lib/ftl/utils/ftl_mempool.o 00:04:38.911 CC lib/ftl/utils/ftl_bitmap.o 00:04:38.911 LIB libspdk_iscsi.a 00:04:38.911 CC lib/ftl/utils/ftl_property.o 00:04:38.911 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:38.911 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:38.911 SO libspdk_iscsi.so.8.0 00:04:38.911 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:39.169 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:39.169 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:39.169 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:39.169 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:39.169 SYMLINK libspdk_iscsi.so 00:04:39.169 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:39.169 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:39.169 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:39.169 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:39.169 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:39.428 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:39.428 CC lib/ftl/base/ftl_base_dev.o 00:04:39.428 CC lib/ftl/base/ftl_base_bdev.o 
00:04:39.428 CC lib/ftl/ftl_trace.o 00:04:39.428 LIB libspdk_nvmf.a 00:04:39.428 LIB libspdk_vhost.a 00:04:39.428 SO libspdk_vhost.so.8.0 00:04:39.686 SO libspdk_nvmf.so.20.0 00:04:39.686 LIB libspdk_ftl.a 00:04:39.686 SYMLINK libspdk_vhost.so 00:04:39.944 SYMLINK libspdk_nvmf.so 00:04:39.944 SO libspdk_ftl.so.9.0 00:04:40.202 SYMLINK libspdk_ftl.so 00:04:40.768 CC module/env_dpdk/env_dpdk_rpc.o 00:04:40.768 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:40.768 CC module/fsdev/aio/fsdev_aio.o 00:04:40.768 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:40.768 CC module/scheduler/gscheduler/gscheduler.o 00:04:40.768 CC module/blob/bdev/blob_bdev.o 00:04:40.768 CC module/keyring/file/keyring.o 00:04:40.768 CC module/keyring/linux/keyring.o 00:04:40.768 CC module/sock/posix/posix.o 00:04:40.768 CC module/accel/error/accel_error.o 00:04:40.768 LIB libspdk_env_dpdk_rpc.a 00:04:40.768 SO libspdk_env_dpdk_rpc.so.6.0 00:04:40.768 SYMLINK libspdk_env_dpdk_rpc.so 00:04:40.768 CC module/accel/error/accel_error_rpc.o 00:04:40.768 LIB libspdk_scheduler_gscheduler.a 00:04:40.768 CC module/keyring/file/keyring_rpc.o 00:04:40.768 CC module/keyring/linux/keyring_rpc.o 00:04:40.768 SO libspdk_scheduler_gscheduler.so.4.0 00:04:40.768 LIB libspdk_scheduler_dpdk_governor.a 00:04:40.768 LIB libspdk_scheduler_dynamic.a 00:04:40.768 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:41.026 SO libspdk_scheduler_dynamic.so.4.0 00:04:41.026 SYMLINK libspdk_scheduler_gscheduler.so 00:04:41.026 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:41.026 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:41.026 CC module/fsdev/aio/linux_aio_mgr.o 00:04:41.026 LIB libspdk_keyring_linux.a 00:04:41.026 SYMLINK libspdk_scheduler_dynamic.so 00:04:41.026 LIB libspdk_accel_error.a 00:04:41.026 LIB libspdk_blob_bdev.a 00:04:41.026 LIB libspdk_keyring_file.a 00:04:41.026 SO libspdk_keyring_linux.so.1.0 00:04:41.026 SO libspdk_blob_bdev.so.11.0 00:04:41.026 SO libspdk_accel_error.so.2.0 00:04:41.026 SO libspdk_keyring_file.so.2.0 00:04:41.026 SYMLINK libspdk_blob_bdev.so 00:04:41.026 SYMLINK libspdk_keyring_linux.so 00:04:41.026 CC module/accel/ioat/accel_ioat.o 00:04:41.026 SYMLINK libspdk_keyring_file.so 00:04:41.026 SYMLINK libspdk_accel_error.so 00:04:41.026 CC module/accel/ioat/accel_ioat_rpc.o 00:04:41.026 CC module/accel/dsa/accel_dsa.o 00:04:41.026 CC module/accel/dsa/accel_dsa_rpc.o 00:04:41.284 CC module/accel/iaa/accel_iaa.o 00:04:41.284 CC module/accel/iaa/accel_iaa_rpc.o 00:04:41.284 LIB libspdk_accel_ioat.a 00:04:41.284 CC module/bdev/delay/vbdev_delay.o 00:04:41.284 SO libspdk_accel_ioat.so.6.0 00:04:41.284 CC module/blobfs/bdev/blobfs_bdev.o 00:04:41.284 CC module/bdev/error/vbdev_error.o 00:04:41.284 SYMLINK libspdk_accel_ioat.so 00:04:41.284 CC module/bdev/gpt/gpt.o 00:04:41.284 CC module/bdev/error/vbdev_error_rpc.o 00:04:41.284 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:41.284 LIB libspdk_fsdev_aio.a 00:04:41.284 LIB libspdk_accel_dsa.a 00:04:41.541 SO libspdk_fsdev_aio.so.1.0 00:04:41.541 LIB libspdk_accel_iaa.a 00:04:41.541 SO libspdk_accel_dsa.so.5.0 00:04:41.541 SO libspdk_accel_iaa.so.3.0 00:04:41.541 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:41.542 SYMLINK libspdk_fsdev_aio.so 00:04:41.542 SYMLINK libspdk_accel_dsa.so 00:04:41.542 LIB libspdk_sock_posix.a 00:04:41.542 SYMLINK libspdk_accel_iaa.so 00:04:41.542 CC module/bdev/gpt/vbdev_gpt.o 00:04:41.542 SO libspdk_sock_posix.so.6.0 00:04:41.542 LIB libspdk_bdev_error.a 00:04:41.801 SO libspdk_bdev_error.so.6.0 00:04:41.801 LIB libspdk_bdev_delay.a 
00:04:41.801 SYMLINK libspdk_sock_posix.so 00:04:41.801 LIB libspdk_blobfs_bdev.a 00:04:41.801 CC module/bdev/lvol/vbdev_lvol.o 00:04:41.801 CC module/bdev/malloc/bdev_malloc.o 00:04:41.801 SO libspdk_blobfs_bdev.so.6.0 00:04:41.801 SO libspdk_bdev_delay.so.6.0 00:04:41.801 SYMLINK libspdk_bdev_error.so 00:04:41.801 CC module/bdev/null/bdev_null.o 00:04:41.801 CC module/bdev/nvme/bdev_nvme.o 00:04:41.801 CC module/bdev/passthru/vbdev_passthru.o 00:04:41.801 SYMLINK libspdk_blobfs_bdev.so 00:04:41.801 SYMLINK libspdk_bdev_delay.so 00:04:41.801 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:41.801 LIB libspdk_bdev_gpt.a 00:04:41.801 CC module/bdev/raid/bdev_raid.o 00:04:41.801 SO libspdk_bdev_gpt.so.6.0 00:04:42.061 CC module/bdev/split/vbdev_split.o 00:04:42.061 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:42.061 SYMLINK libspdk_bdev_gpt.so 00:04:42.061 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:42.061 CC module/bdev/split/vbdev_split_rpc.o 00:04:42.061 CC module/bdev/null/bdev_null_rpc.o 00:04:42.061 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:42.061 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:42.061 LIB libspdk_bdev_malloc.a 00:04:42.061 SO libspdk_bdev_malloc.so.6.0 00:04:42.061 LIB libspdk_bdev_split.a 00:04:42.061 LIB libspdk_bdev_null.a 00:04:42.320 SO libspdk_bdev_split.so.6.0 00:04:42.320 SO libspdk_bdev_null.so.6.0 00:04:42.320 LIB libspdk_bdev_passthru.a 00:04:42.320 SYMLINK libspdk_bdev_malloc.so 00:04:42.320 CC module/bdev/nvme/nvme_rpc.o 00:04:42.320 CC module/bdev/nvme/bdev_mdns_client.o 00:04:42.320 SO libspdk_bdev_passthru.so.6.0 00:04:42.320 SYMLINK libspdk_bdev_split.so 00:04:42.320 SYMLINK libspdk_bdev_null.so 00:04:42.320 CC module/bdev/raid/bdev_raid_rpc.o 00:04:42.320 CC module/bdev/nvme/vbdev_opal.o 00:04:42.320 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:42.320 SYMLINK libspdk_bdev_passthru.so 00:04:42.320 LIB libspdk_bdev_lvol.a 00:04:42.320 SO libspdk_bdev_lvol.so.6.0 00:04:42.320 CC module/bdev/raid/bdev_raid_sb.o 00:04:42.320 LIB libspdk_bdev_zone_block.a 00:04:42.320 SYMLINK libspdk_bdev_lvol.so 00:04:42.578 CC module/bdev/xnvme/bdev_xnvme.o 00:04:42.578 SO libspdk_bdev_zone_block.so.6.0 00:04:42.578 CC module/bdev/raid/raid0.o 00:04:42.578 CC module/bdev/raid/raid1.o 00:04:42.578 SYMLINK libspdk_bdev_zone_block.so 00:04:42.579 CC module/bdev/raid/concat.o 00:04:42.579 CC module/bdev/ftl/bdev_ftl.o 00:04:42.579 CC module/bdev/aio/bdev_aio.o 00:04:42.579 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:42.837 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:42.837 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:42.837 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:42.837 CC module/bdev/aio/bdev_aio_rpc.o 00:04:42.837 LIB libspdk_bdev_xnvme.a 00:04:42.837 SO libspdk_bdev_xnvme.so.3.0 00:04:42.837 LIB libspdk_bdev_ftl.a 00:04:43.096 SO libspdk_bdev_ftl.so.6.0 00:04:43.096 SYMLINK libspdk_bdev_xnvme.so 00:04:43.096 CC module/bdev/iscsi/bdev_iscsi.o 00:04:43.096 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:43.096 SYMLINK libspdk_bdev_ftl.so 00:04:43.096 LIB libspdk_bdev_aio.a 00:04:43.096 LIB libspdk_bdev_raid.a 00:04:43.096 SO libspdk_bdev_aio.so.6.0 00:04:43.096 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:43.096 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:43.096 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:43.096 SO libspdk_bdev_raid.so.6.0 00:04:43.096 SYMLINK libspdk_bdev_aio.so 00:04:43.354 SYMLINK libspdk_bdev_raid.so 00:04:43.354 LIB libspdk_bdev_iscsi.a 00:04:43.354 SO libspdk_bdev_iscsi.so.6.0 00:04:43.613 SYMLINK libspdk_bdev_iscsi.so 00:04:43.613 
LIB libspdk_bdev_virtio.a 00:04:43.613 SO libspdk_bdev_virtio.so.6.0 00:04:43.871 SYMLINK libspdk_bdev_virtio.so 00:04:44.439 LIB libspdk_bdev_nvme.a 00:04:44.697 SO libspdk_bdev_nvme.so.7.1 00:04:44.697 SYMLINK libspdk_bdev_nvme.so 00:04:45.263 CC module/event/subsystems/vmd/vmd.o 00:04:45.263 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:45.263 CC module/event/subsystems/sock/sock.o 00:04:45.263 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:45.263 CC module/event/subsystems/iobuf/iobuf.o 00:04:45.264 CC module/event/subsystems/keyring/keyring.o 00:04:45.264 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:45.264 CC module/event/subsystems/fsdev/fsdev.o 00:04:45.264 CC module/event/subsystems/scheduler/scheduler.o 00:04:45.521 LIB libspdk_event_vmd.a 00:04:45.521 LIB libspdk_event_vhost_blk.a 00:04:45.521 LIB libspdk_event_sock.a 00:04:45.521 LIB libspdk_event_scheduler.a 00:04:45.521 LIB libspdk_event_fsdev.a 00:04:45.521 LIB libspdk_event_keyring.a 00:04:45.521 SO libspdk_event_vhost_blk.so.3.0 00:04:45.521 SO libspdk_event_sock.so.5.0 00:04:45.521 LIB libspdk_event_iobuf.a 00:04:45.521 SO libspdk_event_vmd.so.6.0 00:04:45.521 SO libspdk_event_scheduler.so.4.0 00:04:45.521 SO libspdk_event_fsdev.so.1.0 00:04:45.521 SO libspdk_event_keyring.so.1.0 00:04:45.521 SO libspdk_event_iobuf.so.3.0 00:04:45.521 SYMLINK libspdk_event_vhost_blk.so 00:04:45.521 SYMLINK libspdk_event_sock.so 00:04:45.521 SYMLINK libspdk_event_vmd.so 00:04:45.521 SYMLINK libspdk_event_fsdev.so 00:04:45.521 SYMLINK libspdk_event_scheduler.so 00:04:45.521 SYMLINK libspdk_event_keyring.so 00:04:45.521 SYMLINK libspdk_event_iobuf.so 00:04:46.087 CC module/event/subsystems/accel/accel.o 00:04:46.087 LIB libspdk_event_accel.a 00:04:46.345 SO libspdk_event_accel.so.6.0 00:04:46.345 SYMLINK libspdk_event_accel.so 00:04:46.603 CC module/event/subsystems/bdev/bdev.o 00:04:46.862 LIB libspdk_event_bdev.a 00:04:46.862 SO libspdk_event_bdev.so.6.0 00:04:47.120 SYMLINK libspdk_event_bdev.so 00:04:47.378 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:47.378 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:47.378 CC module/event/subsystems/scsi/scsi.o 00:04:47.378 CC module/event/subsystems/nbd/nbd.o 00:04:47.378 CC module/event/subsystems/ublk/ublk.o 00:04:47.378 LIB libspdk_event_scsi.a 00:04:47.378 LIB libspdk_event_nbd.a 00:04:47.378 LIB libspdk_event_ublk.a 00:04:47.636 SO libspdk_event_scsi.so.6.0 00:04:47.636 SO libspdk_event_ublk.so.3.0 00:04:47.636 SO libspdk_event_nbd.so.6.0 00:04:47.636 LIB libspdk_event_nvmf.a 00:04:47.636 SYMLINK libspdk_event_ublk.so 00:04:47.636 SO libspdk_event_nvmf.so.6.0 00:04:47.636 SYMLINK libspdk_event_scsi.so 00:04:47.636 SYMLINK libspdk_event_nbd.so 00:04:47.636 SYMLINK libspdk_event_nvmf.so 00:04:47.894 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:47.894 CC module/event/subsystems/iscsi/iscsi.o 00:04:48.152 LIB libspdk_event_vhost_scsi.a 00:04:48.152 LIB libspdk_event_iscsi.a 00:04:48.152 SO libspdk_event_vhost_scsi.so.3.0 00:04:48.152 SO libspdk_event_iscsi.so.6.0 00:04:48.152 SYMLINK libspdk_event_vhost_scsi.so 00:04:48.409 SYMLINK libspdk_event_iscsi.so 00:04:48.409 SO libspdk.so.6.0 00:04:48.667 SYMLINK libspdk.so 00:04:48.925 CC app/trace_record/trace_record.o 00:04:48.925 CC app/spdk_lspci/spdk_lspci.o 00:04:48.925 CXX app/trace/trace.o 00:04:48.925 CC app/spdk_nvme_perf/perf.o 00:04:48.925 CC app/spdk_nvme_identify/identify.o 00:04:48.925 CC app/nvmf_tgt/nvmf_main.o 00:04:48.925 CC app/iscsi_tgt/iscsi_tgt.o 00:04:48.925 CC test/thread/poller_perf/poller_perf.o 
00:04:48.925 CC examples/util/zipf/zipf.o 00:04:48.925 CC app/spdk_tgt/spdk_tgt.o 00:04:48.925 LINK spdk_lspci 00:04:48.925 LINK nvmf_tgt 00:04:49.184 LINK zipf 00:04:49.184 LINK poller_perf 00:04:49.184 LINK spdk_trace_record 00:04:49.184 LINK iscsi_tgt 00:04:49.184 LINK spdk_tgt 00:04:49.184 LINK spdk_trace 00:04:49.442 CC app/spdk_nvme_discover/discovery_aer.o 00:04:49.442 CC test/dma/test_dma/test_dma.o 00:04:49.442 CC app/spdk_top/spdk_top.o 00:04:49.442 CC examples/ioat/perf/perf.o 00:04:49.442 CC examples/vmd/lsvmd/lsvmd.o 00:04:49.442 CC examples/vmd/led/led.o 00:04:49.442 CC examples/idxd/perf/perf.o 00:04:49.442 LINK spdk_nvme_discover 00:04:49.442 LINK lsvmd 00:04:49.700 LINK led 00:04:49.700 CC test/app/bdev_svc/bdev_svc.o 00:04:49.700 LINK ioat_perf 00:04:49.700 LINK spdk_nvme_perf 00:04:49.700 LINK spdk_nvme_identify 00:04:49.700 LINK bdev_svc 00:04:49.700 CC test/app/histogram_perf/histogram_perf.o 00:04:49.958 CC examples/ioat/verify/verify.o 00:04:49.958 LINK idxd_perf 00:04:49.958 LINK test_dma 00:04:49.958 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:49.959 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:49.959 LINK histogram_perf 00:04:49.959 CC test/app/jsoncat/jsoncat.o 00:04:50.218 LINK verify 00:04:50.218 TEST_HEADER include/spdk/accel.h 00:04:50.218 TEST_HEADER include/spdk/accel_module.h 00:04:50.218 TEST_HEADER include/spdk/assert.h 00:04:50.218 TEST_HEADER include/spdk/barrier.h 00:04:50.218 TEST_HEADER include/spdk/base64.h 00:04:50.218 LINK interrupt_tgt 00:04:50.218 TEST_HEADER include/spdk/bdev.h 00:04:50.218 TEST_HEADER include/spdk/bdev_module.h 00:04:50.218 TEST_HEADER include/spdk/bdev_zone.h 00:04:50.218 TEST_HEADER include/spdk/bit_array.h 00:04:50.218 TEST_HEADER include/spdk/bit_pool.h 00:04:50.218 TEST_HEADER include/spdk/blob_bdev.h 00:04:50.218 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:50.218 TEST_HEADER include/spdk/blobfs.h 00:04:50.218 TEST_HEADER include/spdk/blob.h 00:04:50.218 TEST_HEADER include/spdk/conf.h 00:04:50.218 TEST_HEADER include/spdk/config.h 00:04:50.218 TEST_HEADER include/spdk/cpuset.h 00:04:50.218 CC examples/thread/thread/thread_ex.o 00:04:50.218 TEST_HEADER include/spdk/crc16.h 00:04:50.218 TEST_HEADER include/spdk/crc32.h 00:04:50.218 TEST_HEADER include/spdk/crc64.h 00:04:50.218 TEST_HEADER include/spdk/dif.h 00:04:50.218 TEST_HEADER include/spdk/dma.h 00:04:50.218 TEST_HEADER include/spdk/endian.h 00:04:50.218 TEST_HEADER include/spdk/env_dpdk.h 00:04:50.218 TEST_HEADER include/spdk/env.h 00:04:50.218 TEST_HEADER include/spdk/event.h 00:04:50.218 TEST_HEADER include/spdk/fd_group.h 00:04:50.218 TEST_HEADER include/spdk/fd.h 00:04:50.218 TEST_HEADER include/spdk/file.h 00:04:50.218 TEST_HEADER include/spdk/fsdev.h 00:04:50.218 CC examples/sock/hello_world/hello_sock.o 00:04:50.218 TEST_HEADER include/spdk/fsdev_module.h 00:04:50.218 TEST_HEADER include/spdk/ftl.h 00:04:50.218 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:50.218 TEST_HEADER include/spdk/gpt_spec.h 00:04:50.218 TEST_HEADER include/spdk/hexlify.h 00:04:50.218 TEST_HEADER include/spdk/histogram_data.h 00:04:50.218 TEST_HEADER include/spdk/idxd.h 00:04:50.218 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:50.218 TEST_HEADER include/spdk/idxd_spec.h 00:04:50.218 TEST_HEADER include/spdk/init.h 00:04:50.218 LINK jsoncat 00:04:50.218 TEST_HEADER include/spdk/ioat.h 00:04:50.218 TEST_HEADER include/spdk/ioat_spec.h 00:04:50.218 TEST_HEADER include/spdk/iscsi_spec.h 00:04:50.218 TEST_HEADER include/spdk/json.h 00:04:50.218 TEST_HEADER include/spdk/jsonrpc.h 
00:04:50.218 TEST_HEADER include/spdk/keyring.h 00:04:50.218 TEST_HEADER include/spdk/keyring_module.h 00:04:50.218 TEST_HEADER include/spdk/likely.h 00:04:50.218 TEST_HEADER include/spdk/log.h 00:04:50.218 TEST_HEADER include/spdk/lvol.h 00:04:50.218 TEST_HEADER include/spdk/md5.h 00:04:50.218 TEST_HEADER include/spdk/memory.h 00:04:50.218 CC test/app/stub/stub.o 00:04:50.218 TEST_HEADER include/spdk/mmio.h 00:04:50.218 TEST_HEADER include/spdk/nbd.h 00:04:50.218 TEST_HEADER include/spdk/net.h 00:04:50.218 TEST_HEADER include/spdk/notify.h 00:04:50.218 TEST_HEADER include/spdk/nvme.h 00:04:50.218 TEST_HEADER include/spdk/nvme_intel.h 00:04:50.218 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:50.218 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:50.218 TEST_HEADER include/spdk/nvme_spec.h 00:04:50.218 TEST_HEADER include/spdk/nvme_zns.h 00:04:50.218 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:50.218 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:50.218 TEST_HEADER include/spdk/nvmf.h 00:04:50.218 TEST_HEADER include/spdk/nvmf_spec.h 00:04:50.218 TEST_HEADER include/spdk/nvmf_transport.h 00:04:50.218 TEST_HEADER include/spdk/opal.h 00:04:50.218 TEST_HEADER include/spdk/opal_spec.h 00:04:50.218 TEST_HEADER include/spdk/pci_ids.h 00:04:50.218 TEST_HEADER include/spdk/pipe.h 00:04:50.218 TEST_HEADER include/spdk/queue.h 00:04:50.218 TEST_HEADER include/spdk/reduce.h 00:04:50.218 TEST_HEADER include/spdk/rpc.h 00:04:50.218 TEST_HEADER include/spdk/scheduler.h 00:04:50.218 TEST_HEADER include/spdk/scsi.h 00:04:50.218 TEST_HEADER include/spdk/scsi_spec.h 00:04:50.218 TEST_HEADER include/spdk/sock.h 00:04:50.218 TEST_HEADER include/spdk/stdinc.h 00:04:50.218 TEST_HEADER include/spdk/string.h 00:04:50.218 TEST_HEADER include/spdk/thread.h 00:04:50.218 TEST_HEADER include/spdk/trace.h 00:04:50.218 TEST_HEADER include/spdk/trace_parser.h 00:04:50.218 TEST_HEADER include/spdk/tree.h 00:04:50.218 TEST_HEADER include/spdk/ublk.h 00:04:50.218 TEST_HEADER include/spdk/util.h 00:04:50.218 TEST_HEADER include/spdk/uuid.h 00:04:50.218 TEST_HEADER include/spdk/version.h 00:04:50.218 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:50.218 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:50.218 TEST_HEADER include/spdk/vhost.h 00:04:50.218 TEST_HEADER include/spdk/vmd.h 00:04:50.218 TEST_HEADER include/spdk/xor.h 00:04:50.218 TEST_HEADER include/spdk/zipf.h 00:04:50.218 CXX test/cpp_headers/accel.o 00:04:50.218 CXX test/cpp_headers/accel_module.o 00:04:50.218 LINK nvme_fuzz 00:04:50.479 CC app/spdk_dd/spdk_dd.o 00:04:50.479 LINK stub 00:04:50.479 LINK thread 00:04:50.479 LINK spdk_top 00:04:50.479 LINK hello_sock 00:04:50.479 CXX test/cpp_headers/assert.o 00:04:50.479 CC app/fio/nvme/fio_plugin.o 00:04:50.479 CXX test/cpp_headers/barrier.o 00:04:50.479 CC app/fio/bdev/fio_plugin.o 00:04:50.479 CXX test/cpp_headers/base64.o 00:04:50.479 CXX test/cpp_headers/bdev.o 00:04:50.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:50.742 LINK spdk_dd 00:04:50.742 CC examples/accel/perf/accel_perf.o 00:04:50.742 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:50.742 CXX test/cpp_headers/bdev_module.o 00:04:50.742 CC test/env/vtophys/vtophys.o 00:04:50.742 CC test/event/event_perf/event_perf.o 00:04:51.001 CC test/env/mem_callbacks/mem_callbacks.o 00:04:51.001 CXX test/cpp_headers/bdev_zone.o 00:04:51.001 LINK event_perf 00:04:51.001 LINK vtophys 00:04:51.001 LINK spdk_bdev 00:04:51.001 LINK spdk_nvme 00:04:51.001 CC test/event/reactor/reactor.o 00:04:51.260 CXX test/cpp_headers/bit_array.o 00:04:51.260 LINK vhost_fuzz 
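(The TEST_HEADER / CXX pairs above come from the header self-containedness check: every public spdk/*.h is enumerated, pulled into its own translation unit under test/cpp_headers, and compiled standalone as C++, so a header that forgets one of its own includes fails here instead of in an application build. A condensed sketch of the idea — the paths and compiler flags are assumptions, not the exact build rule:

    # compile each public header in isolation; a missing include breaks it
    mkdir -p test/cpp_headers
    for hdr in include/spdk/*.h; do
      name=$(basename "$hdr" .h)
      echo "#include <spdk/$name.h>" > "test/cpp_headers/$name.cpp"
      g++ -I include -std=c++11 -c "test/cpp_headers/$name.cpp" \
          -o "test/cpp_headers/$name.o"
    done

Each CXX test/cpp_headers/*.o line in the log corresponds to one such single-header translation unit.)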
00:04:51.260 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:51.260 CC test/event/reactor_perf/reactor_perf.o 00:04:51.260 CC test/event/app_repeat/app_repeat.o 00:04:51.260 LINK reactor 00:04:51.260 CXX test/cpp_headers/bit_pool.o 00:04:51.260 LINK accel_perf 00:04:51.260 CC app/vhost/vhost.o 00:04:51.260 LINK mem_callbacks 00:04:51.542 LINK env_dpdk_post_init 00:04:51.542 LINK reactor_perf 00:04:51.542 LINK app_repeat 00:04:51.542 CXX test/cpp_headers/blob_bdev.o 00:04:51.542 CC test/event/scheduler/scheduler.o 00:04:51.542 LINK vhost 00:04:51.542 CC examples/blob/hello_world/hello_blob.o 00:04:51.542 CC examples/nvme/hello_world/hello_world.o 00:04:51.801 CXX test/cpp_headers/blobfs_bdev.o 00:04:51.801 CC test/env/memory/memory_ut.o 00:04:51.801 CC examples/blob/cli/blobcli.o 00:04:51.801 CC test/env/pci/pci_ut.o 00:04:51.801 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:51.801 LINK scheduler 00:04:51.801 LINK hello_blob 00:04:51.801 CXX test/cpp_headers/blobfs.o 00:04:51.801 CC test/nvme/aer/aer.o 00:04:51.801 LINK hello_world 00:04:52.060 LINK hello_fsdev 00:04:52.060 CC test/rpc_client/rpc_client_test.o 00:04:52.060 LINK iscsi_fuzz 00:04:52.060 CXX test/cpp_headers/blob.o 00:04:52.060 LINK pci_ut 00:04:52.060 CC examples/nvme/reconnect/reconnect.o 00:04:52.318 CXX test/cpp_headers/conf.o 00:04:52.318 LINK blobcli 00:04:52.318 LINK rpc_client_test 00:04:52.318 LINK aer 00:04:52.318 CC examples/bdev/hello_world/hello_bdev.o 00:04:52.318 CC examples/bdev/bdevperf/bdevperf.o 00:04:52.318 CXX test/cpp_headers/config.o 00:04:52.318 CXX test/cpp_headers/cpuset.o 00:04:52.578 CC test/accel/dif/dif.o 00:04:52.578 CC test/nvme/reset/reset.o 00:04:52.578 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:52.578 LINK hello_bdev 00:04:52.578 LINK reconnect 00:04:52.578 CXX test/cpp_headers/crc16.o 00:04:52.578 CC test/blobfs/mkfs/mkfs.o 00:04:52.578 CC test/lvol/esnap/esnap.o 00:04:52.837 CXX test/cpp_headers/crc32.o 00:04:52.837 LINK reset 00:04:52.837 LINK mkfs 00:04:52.837 CC examples/nvme/arbitration/arbitration.o 00:04:52.837 LINK memory_ut 00:04:52.837 CC examples/nvme/hotplug/hotplug.o 00:04:52.837 CXX test/cpp_headers/crc64.o 00:04:53.096 CC test/nvme/sgl/sgl.o 00:04:53.096 LINK nvme_manage 00:04:53.096 CXX test/cpp_headers/dif.o 00:04:53.096 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:53.096 LINK hotplug 00:04:53.096 CC test/nvme/e2edp/nvme_dp.o 00:04:53.096 LINK arbitration 00:04:53.096 LINK bdevperf 00:04:53.096 LINK dif 00:04:53.096 CXX test/cpp_headers/dma.o 00:04:53.096 CXX test/cpp_headers/endian.o 00:04:53.096 LINK cmb_copy 00:04:53.355 LINK sgl 00:04:53.355 CC test/nvme/overhead/overhead.o 00:04:53.355 CXX test/cpp_headers/env_dpdk.o 00:04:53.355 CXX test/cpp_headers/env.o 00:04:53.355 CC test/nvme/err_injection/err_injection.o 00:04:53.355 LINK nvme_dp 00:04:53.355 CC examples/nvme/abort/abort.o 00:04:53.355 CC test/nvme/startup/startup.o 00:04:53.355 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:53.614 CXX test/cpp_headers/event.o 00:04:53.614 LINK err_injection 00:04:53.614 CC test/nvme/reserve/reserve.o 00:04:53.614 CC test/nvme/simple_copy/simple_copy.o 00:04:53.614 LINK overhead 00:04:53.614 LINK startup 00:04:53.614 LINK pmr_persistence 00:04:53.614 CC test/nvme/connect_stress/connect_stress.o 00:04:53.614 CXX test/cpp_headers/fd_group.o 00:04:53.873 LINK reserve 00:04:53.873 CC test/nvme/boot_partition/boot_partition.o 00:04:53.873 CXX test/cpp_headers/fd.o 00:04:53.873 LINK abort 00:04:53.873 LINK simple_copy 00:04:53.873 CC 
test/nvme/compliance/nvme_compliance.o 00:04:53.873 LINK connect_stress 00:04:53.873 CC test/nvme/fused_ordering/fused_ordering.o 00:04:53.873 LINK boot_partition 00:04:54.131 CXX test/cpp_headers/file.o 00:04:54.131 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:54.131 CC test/bdev/bdevio/bdevio.o 00:04:54.131 LINK fused_ordering 00:04:54.131 CC test/nvme/fdp/fdp.o 00:04:54.131 CC test/nvme/cuse/cuse.o 00:04:54.131 CXX test/cpp_headers/fsdev.o 00:04:54.131 CXX test/cpp_headers/fsdev_module.o 00:04:54.131 CC examples/nvmf/nvmf/nvmf.o 00:04:54.131 LINK nvme_compliance 00:04:54.131 LINK doorbell_aers 00:04:54.131 CXX test/cpp_headers/ftl.o 00:04:54.390 CXX test/cpp_headers/fuse_dispatcher.o 00:04:54.390 CXX test/cpp_headers/gpt_spec.o 00:04:54.390 CXX test/cpp_headers/hexlify.o 00:04:54.390 CXX test/cpp_headers/histogram_data.o 00:04:54.390 CXX test/cpp_headers/idxd.o 00:04:54.390 LINK fdp 00:04:54.390 LINK bdevio 00:04:54.390 CXX test/cpp_headers/idxd_spec.o 00:04:54.390 CXX test/cpp_headers/init.o 00:04:54.656 LINK nvmf 00:04:54.656 CXX test/cpp_headers/ioat.o 00:04:54.656 CXX test/cpp_headers/ioat_spec.o 00:04:54.656 CXX test/cpp_headers/iscsi_spec.o 00:04:54.656 CXX test/cpp_headers/json.o 00:04:54.656 CXX test/cpp_headers/jsonrpc.o 00:04:54.656 CXX test/cpp_headers/keyring.o 00:04:54.656 CXX test/cpp_headers/keyring_module.o 00:04:54.656 CXX test/cpp_headers/likely.o 00:04:54.656 CXX test/cpp_headers/log.o 00:04:54.656 CXX test/cpp_headers/lvol.o 00:04:54.656 CXX test/cpp_headers/md5.o 00:04:54.656 CXX test/cpp_headers/memory.o 00:04:54.656 CXX test/cpp_headers/mmio.o 00:04:54.915 CXX test/cpp_headers/nbd.o 00:04:54.915 CXX test/cpp_headers/net.o 00:04:54.915 CXX test/cpp_headers/notify.o 00:04:54.915 CXX test/cpp_headers/nvme.o 00:04:54.915 CXX test/cpp_headers/nvme_intel.o 00:04:54.915 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.915 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.915 CXX test/cpp_headers/nvme_spec.o 00:04:54.915 CXX test/cpp_headers/nvme_zns.o 00:04:54.915 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.916 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:54.916 CXX test/cpp_headers/nvmf.o 00:04:54.916 CXX test/cpp_headers/nvmf_spec.o 00:04:54.916 CXX test/cpp_headers/nvmf_transport.o 00:04:55.175 CXX test/cpp_headers/opal.o 00:04:55.175 CXX test/cpp_headers/opal_spec.o 00:04:55.175 CXX test/cpp_headers/pci_ids.o 00:04:55.175 CXX test/cpp_headers/pipe.o 00:04:55.175 CXX test/cpp_headers/queue.o 00:04:55.175 CXX test/cpp_headers/reduce.o 00:04:55.175 CXX test/cpp_headers/rpc.o 00:04:55.175 CXX test/cpp_headers/scheduler.o 00:04:55.175 CXX test/cpp_headers/scsi.o 00:04:55.175 CXX test/cpp_headers/scsi_spec.o 00:04:55.175 CXX test/cpp_headers/sock.o 00:04:55.434 CXX test/cpp_headers/stdinc.o 00:04:55.434 CXX test/cpp_headers/string.o 00:04:55.434 CXX test/cpp_headers/thread.o 00:04:55.434 CXX test/cpp_headers/trace.o 00:04:55.434 CXX test/cpp_headers/trace_parser.o 00:04:55.434 CXX test/cpp_headers/tree.o 00:04:55.434 CXX test/cpp_headers/ublk.o 00:04:55.434 CXX test/cpp_headers/util.o 00:04:55.434 CXX test/cpp_headers/uuid.o 00:04:55.434 LINK cuse 00:04:55.434 CXX test/cpp_headers/version.o 00:04:55.434 CXX test/cpp_headers/vfio_user_pci.o 00:04:55.434 CXX test/cpp_headers/vfio_user_spec.o 00:04:55.434 CXX test/cpp_headers/vhost.o 00:04:55.434 CXX test/cpp_headers/vmd.o 00:04:55.434 CXX test/cpp_headers/xor.o 00:04:55.692 CXX test/cpp_headers/zipf.o 00:04:58.978 LINK esnap 00:04:58.978 00:04:58.978 real 1m20.963s 00:04:58.978 user 7m4.461s 00:04:58.978 sys 1m46.269s 00:04:58.978 
************************************ 00:04:58.978 END TEST make 00:04:58.978 ************************************ 00:04:58.978 17:54:27 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:58.978 17:54:27 make -- common/autotest_common.sh@10 -- $ set +x 00:04:58.978 17:54:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:58.978 17:54:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:58.978 17:54:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:58.978 17:54:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.978 17:54:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:58.978 17:54:28 -- pm/common@44 -- $ pid=5292 00:04:58.978 17:54:28 -- pm/common@50 -- $ kill -TERM 5292 00:04:58.978 17:54:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:58.978 17:54:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:58.978 17:54:28 -- pm/common@44 -- $ pid=5293 00:04:58.978 17:54:28 -- pm/common@50 -- $ kill -TERM 5293 00:04:58.978 17:54:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:58.978 17:54:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:58.978 17:54:28 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.978 17:54:28 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.978 17:54:28 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.978 17:54:28 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.978 17:54:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.978 17:54:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.978 17:54:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.978 17:54:28 -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.978 17:54:28 -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.978 17:54:28 -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.978 17:54:28 -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.978 17:54:28 -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.978 17:54:28 -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.978 17:54:28 -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.978 17:54:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.978 17:54:28 -- scripts/common.sh@344 -- # case "$op" in 00:04:58.978 17:54:28 -- scripts/common.sh@345 -- # : 1 00:04:58.978 17:54:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.978 17:54:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.978 17:54:28 -- scripts/common.sh@365 -- # decimal 1 00:04:58.978 17:54:28 -- scripts/common.sh@353 -- # local d=1 00:04:58.978 17:54:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.978 17:54:28 -- scripts/common.sh@355 -- # echo 1 00:04:58.978 17:54:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.978 17:54:28 -- scripts/common.sh@366 -- # decimal 2 00:04:58.978 17:54:28 -- scripts/common.sh@353 -- # local d=2 00:04:58.978 17:54:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.978 17:54:28 -- scripts/common.sh@355 -- # echo 2 00:04:58.978 17:54:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.978 17:54:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.978 17:54:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.978 17:54:28 -- scripts/common.sh@368 -- # return 0 00:04:58.978 17:54:28 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.978 17:54:28 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.978 --rc genhtml_branch_coverage=1 00:04:58.978 --rc genhtml_function_coverage=1 00:04:58.978 --rc genhtml_legend=1 00:04:58.978 --rc geninfo_all_blocks=1 00:04:58.978 --rc geninfo_unexecuted_blocks=1 00:04:58.978 00:04:58.978 ' 00:04:58.978 17:54:28 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.978 --rc genhtml_branch_coverage=1 00:04:58.978 --rc genhtml_function_coverage=1 00:04:58.978 --rc genhtml_legend=1 00:04:58.978 --rc geninfo_all_blocks=1 00:04:58.978 --rc geninfo_unexecuted_blocks=1 00:04:58.978 00:04:58.978 ' 00:04:58.978 17:54:28 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.978 --rc genhtml_branch_coverage=1 00:04:58.978 --rc genhtml_function_coverage=1 00:04:58.978 --rc genhtml_legend=1 00:04:58.978 --rc geninfo_all_blocks=1 00:04:58.978 --rc geninfo_unexecuted_blocks=1 00:04:58.978 00:04:58.978 ' 00:04:58.978 17:54:28 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.978 --rc genhtml_branch_coverage=1 00:04:58.978 --rc genhtml_function_coverage=1 00:04:58.978 --rc genhtml_legend=1 00:04:58.978 --rc geninfo_all_blocks=1 00:04:58.978 --rc geninfo_unexecuted_blocks=1 00:04:58.978 00:04:58.978 ' 00:04:58.978 17:54:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.978 17:54:28 -- nvmf/common.sh@7 -- # uname -s 00:04:58.978 17:54:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.978 17:54:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.978 17:54:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.978 17:54:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.978 17:54:28 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.978 17:54:28 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:58.978 17:54:28 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.978 17:54:28 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:58.978 17:54:28 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9149b43b-a128-4f4b-a4f1-526b0f9933e8 00:04:58.978 17:54:28 -- nvmf/common.sh@16 -- # NVME_HOSTID=9149b43b-a128-4f4b-a4f1-526b0f9933e8 00:04:58.978 17:54:28 -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.979 17:54:28 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:58.979 17:54:28 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:58.979 17:54:28 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.979 17:54:28 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.979 17:54:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.237 17:54:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.237 17:54:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.237 17:54:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.237 17:54:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.238 17:54:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.238 17:54:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.238 17:54:28 -- paths/export.sh@5 -- # export PATH 00:04:59.238 17:54:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.238 17:54:28 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:04:59.238 17:54:28 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:59.238 17:54:28 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:59.238 17:54:28 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:59.238 17:54:28 -- nvmf/common.sh@50 -- # : 0 00:04:59.238 17:54:28 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:59.238 17:54:28 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:59.238 17:54:28 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:59.238 17:54:28 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.238 17:54:28 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.238 17:54:28 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:59.238 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:59.238 17:54:28 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:59.238 17:54:28 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:59.238 17:54:28 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:59.238 17:54:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:59.238 17:54:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:59.238 17:54:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:59.238 17:54:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:59.238 17:54:28 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.238 17:54:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:59.238 17:54:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.238 17:54:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.238 17:54:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.238 17:54:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:59.238 17:54:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:59.238 17:54:28 -- spdk/autotest.sh@48 -- # udevadm_pid=54712 00:04:59.238 17:54:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.238 17:54:28 -- pm/common@17 -- # local monitor 00:04:59.238 17:54:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.238 17:54:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.238 17:54:28 -- pm/common@21 -- # date +%s 00:04:59.238 17:54:28 -- pm/common@25 -- # sleep 1 00:04:59.238 17:54:28 -- pm/common@21 -- # date +%s 00:04:59.238 17:54:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730829268 00:04:59.238 17:54:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730829268 00:04:59.238 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730829268_collect-cpu-load.pm.log 00:04:59.238 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730829268_collect-vmstat.pm.log 00:05:00.173 17:54:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:00.173 17:54:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:00.173 17:54:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.173 17:54:29 -- common/autotest_common.sh@10 -- # set +x 00:05:00.173 17:54:29 -- spdk/autotest.sh@59 -- # create_test_list 00:05:00.173 17:54:29 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:00.173 17:54:29 -- common/autotest_common.sh@10 -- # set +x 00:05:00.173 17:54:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:00.173 17:54:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:00.173 17:54:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:00.173 17:54:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:00.173 17:54:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:00.173 17:54:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:00.173 17:54:29 -- common/autotest_common.sh@1455 -- # uname 00:05:00.173 17:54:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:00.173 17:54:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:00.173 17:54:29 -- common/autotest_common.sh@1475 -- # uname 00:05:00.173 17:54:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:00.173 17:54:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:00.173 17:54:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:00.432 lcov: LCOV version 1.15 00:05:00.432 17:54:29 -- 
spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:15.315 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:15.315 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.191 17:54:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.191 17:54:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:30.191 17:54:59 -- common/autotest_common.sh@10 -- # set +x 00:05:30.191 17:54:59 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.191 17:54:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.361 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:31.361 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:31.361 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:31.620 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:31.620 17:55:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:31.620 17:55:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:31.620 17:55:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:31.620 17:55:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:31.620 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.620 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:31.620 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:31.620 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.620 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.620 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.620 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:31.620 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:31.620 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:31.620 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.620 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.620 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:31.620 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:31.620 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:31.620 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.620 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.620 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:31.620 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:31.620 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:31.621 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.621 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.621 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 
00:05:31.621 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:31.621 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:31.621 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.621 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.621 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:31.621 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:31.621 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:31.621 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.621 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.621 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:31.621 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:31.621 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:31.621 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.621 17:55:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:31.621 17:55:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.621 17:55:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.621 17:55:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:31.621 17:55:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:31.621 17:55:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:31.621 No valid GPT data, bailing 00:05:31.621 17:55:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:31.621 17:55:00 -- scripts/common.sh@394 -- # pt= 00:05:31.621 17:55:00 -- scripts/common.sh@395 -- # return 1 00:05:31.621 17:55:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:31.621 1+0 records in 00:05:31.621 1+0 records out 00:05:31.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0183967 s, 57.0 MB/s 00:05:31.621 17:55:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.621 17:55:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.621 17:55:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:31.621 17:55:00 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:31.621 17:55:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:31.621 No valid GPT data, bailing 00:05:31.621 17:55:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:31.621 17:55:00 -- scripts/common.sh@394 -- # pt= 00:05:31.621 17:55:00 -- scripts/common.sh@395 -- # return 1 00:05:31.621 17:55:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:31.621 1+0 records in 00:05:31.621 1+0 records out 00:05:31.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653451 s, 160 MB/s 00:05:31.621 17:55:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.621 17:55:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.621 17:55:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:31.621 17:55:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:31.621 17:55:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:31.880 No valid GPT data, bailing 00:05:31.880 17:55:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:31.880 17:55:00 -- 
scripts/common.sh@394 -- # pt= 00:05:31.880 17:55:00 -- scripts/common.sh@395 -- # return 1 00:05:31.880 17:55:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:31.880 1+0 records in 00:05:31.880 1+0 records out 00:05:31.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628048 s, 167 MB/s 00:05:31.880 17:55:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.880 17:55:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.880 17:55:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:31.880 17:55:00 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:31.880 17:55:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:31.880 No valid GPT data, bailing 00:05:31.880 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:31.880 17:55:01 -- scripts/common.sh@394 -- # pt= 00:05:31.880 17:55:01 -- scripts/common.sh@395 -- # return 1 00:05:31.880 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:31.880 1+0 records in 00:05:31.880 1+0 records out 00:05:31.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00590204 s, 178 MB/s 00:05:31.880 17:55:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.880 17:55:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.880 17:55:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:31.880 17:55:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:31.880 17:55:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:31.880 No valid GPT data, bailing 00:05:31.880 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:31.880 17:55:01 -- scripts/common.sh@394 -- # pt= 00:05:31.880 17:55:01 -- scripts/common.sh@395 -- # return 1 00:05:31.880 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:31.880 1+0 records in 00:05:31.880 1+0 records out 00:05:31.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446552 s, 235 MB/s 00:05:31.880 17:55:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.880 17:55:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.880 17:55:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:31.880 17:55:01 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:31.880 17:55:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:31.880 No valid GPT data, bailing 00:05:31.880 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:32.140 17:55:01 -- scripts/common.sh@394 -- # pt= 00:05:32.140 17:55:01 -- scripts/common.sh@395 -- # return 1 00:05:32.140 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:32.140 1+0 records in 00:05:32.140 1+0 records out 00:05:32.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134411 s, 78.0 MB/s 00:05:32.140 17:55:01 -- spdk/autotest.sh@105 -- # sync 00:05:32.140 17:55:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:32.140 17:55:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:32.140 17:55:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:35.431 17:55:04 -- spdk/autotest.sh@111 -- # uname -s 00:05:35.431 17:55:04 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:35.431 17:55:04 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:35.431 17:55:04 -- 
spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:35.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.999 Hugepages 00:05:35.999 node hugesize free / total 00:05:35.999 node0 1048576kB 0 / 0 00:05:35.999 node0 2048kB 0 / 0 00:05:35.999 00:05:35.999 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.259 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:36.259 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:36.259 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:36.518 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:36.518 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:36.518 17:55:05 -- spdk/autotest.sh@117 -- # uname -s 00:05:36.518 17:55:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:36.518 17:55:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:36.518 17:55:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.021 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.021 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.021 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.021 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.280 17:55:07 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:39.216 17:55:08 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:39.216 17:55:08 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:39.216 17:55:08 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:39.216 17:55:08 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:39.216 17:55:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:39.216 17:55:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:39.216 17:55:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:39.216 17:55:08 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:39.216 17:55:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:39.216 17:55:08 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:39.216 17:55:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:39.216 17:55:08 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.048 Waiting for block devices as requested 00:05:40.048 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.048 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.308 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.308 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:45.592 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:45.592 17:55:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 
-- # grep 0000:00:10.0/nvme/nvme 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.592 17:55:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:45.592 17:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.592 17:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1541 -- # continue 00:05:45.592 17:55:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.592 17:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1541 -- # continue 00:05:45.592 17:55:14 -- common/autotest_common.sh@1522 -- # 
for bdf in "${bdfs[@]}" 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.592 17:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1541 -- # continue 00:05:45.592 17:55:14 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:45.592 17:55:14 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:45.592 17:55:14 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.592 17:55:14 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.592 
17:55:14 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.592 17:55:14 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.592 17:55:14 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.592 17:55:14 -- common/autotest_common.sh@1541 -- # continue 00:05:45.592 17:55:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:45.592 17:55:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:45.592 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:05:45.593 17:55:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:45.593 17:55:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.593 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:05:45.593 17:55:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.095 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.095 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.095 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.095 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.095 17:55:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:47.095 17:55:16 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:47.095 17:55:16 -- common/autotest_common.sh@10 -- # set +x 00:05:47.095 17:55:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:47.095 17:55:16 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:47.095 17:55:16 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:47.095 17:55:16 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:47.096 17:55:16 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:47.096 17:55:16 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:47.096 17:55:16 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:47.096 17:55:16 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:47.096 17:55:16 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:47.096 17:55:16 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:47.096 17:55:16 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.096 17:55:16 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.096 17:55:16 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:47.355 17:55:16 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:47.355 17:55:16 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:47.355 17:55:16 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.355 17:55:16 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.355 17:55:16 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.355 17:55:16 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.355 17:55:16 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # 
cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.355 17:55:16 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.355 17:55:16 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:47.355 17:55:16 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:47.355 17:55:16 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:47.355 17:55:16 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:47.355 17:55:16 -- common/autotest_common.sh@1570 -- # return 0 00:05:47.355 17:55:16 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:47.355 17:55:16 -- common/autotest_common.sh@1578 -- # return 0 00:05:47.355 17:55:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:47.355 17:55:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:47.355 17:55:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.355 17:55:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:47.355 17:55:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:47.355 17:55:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:47.355 17:55:16 -- common/autotest_common.sh@10 -- # set +x 00:05:47.355 17:55:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:47.355 17:55:16 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.355 17:55:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.355 17:55:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.355 17:55:16 -- common/autotest_common.sh@10 -- # set +x 00:05:47.355 ************************************ 00:05:47.355 START TEST env 00:05:47.355 ************************************ 00:05:47.355 17:55:16 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:47.355 * Looking for test storage... 00:05:47.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:47.355 17:55:16 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.355 17:55:16 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.355 17:55:16 env -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.614 17:55:16 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.614 17:55:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.614 17:55:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.614 17:55:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.614 17:55:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.614 17:55:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.614 17:55:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.614 17:55:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.614 17:55:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.614 17:55:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.614 17:55:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.614 17:55:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.614 17:55:16 env -- scripts/common.sh@344 -- # case "$op" in 00:05:47.614 17:55:16 env -- scripts/common.sh@345 -- # : 1 00:05:47.614 17:55:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.614 17:55:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.614 17:55:16 env -- scripts/common.sh@365 -- # decimal 1 00:05:47.614 17:55:16 env -- scripts/common.sh@353 -- # local d=1 00:05:47.614 17:55:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.614 17:55:16 env -- scripts/common.sh@355 -- # echo 1 00:05:47.614 17:55:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.614 17:55:16 env -- scripts/common.sh@366 -- # decimal 2 00:05:47.614 17:55:16 env -- scripts/common.sh@353 -- # local d=2 00:05:47.614 17:55:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.614 17:55:16 env -- scripts/common.sh@355 -- # echo 2 00:05:47.614 17:55:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.614 17:55:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.614 17:55:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.614 17:55:16 env -- scripts/common.sh@368 -- # return 0 00:05:47.614 17:55:16 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.614 17:55:16 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.614 --rc genhtml_branch_coverage=1 00:05:47.614 --rc genhtml_function_coverage=1 00:05:47.614 --rc genhtml_legend=1 00:05:47.615 --rc geninfo_all_blocks=1 00:05:47.615 --rc geninfo_unexecuted_blocks=1 00:05:47.615 00:05:47.615 ' 00:05:47.615 17:55:16 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.615 --rc genhtml_branch_coverage=1 00:05:47.615 --rc genhtml_function_coverage=1 00:05:47.615 --rc genhtml_legend=1 00:05:47.615 --rc geninfo_all_blocks=1 00:05:47.615 --rc geninfo_unexecuted_blocks=1 00:05:47.615 00:05:47.615 ' 00:05:47.615 17:55:16 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.615 --rc genhtml_branch_coverage=1 00:05:47.615 --rc genhtml_function_coverage=1 00:05:47.615 --rc genhtml_legend=1 00:05:47.615 --rc geninfo_all_blocks=1 00:05:47.615 --rc geninfo_unexecuted_blocks=1 00:05:47.615 00:05:47.615 ' 00:05:47.615 17:55:16 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.615 --rc genhtml_branch_coverage=1 00:05:47.615 --rc genhtml_function_coverage=1 00:05:47.615 --rc genhtml_legend=1 00:05:47.615 --rc geninfo_all_blocks=1 00:05:47.615 --rc geninfo_unexecuted_blocks=1 00:05:47.615 00:05:47.615 ' 00:05:47.615 17:55:16 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.615 17:55:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.615 17:55:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.615 17:55:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.615 ************************************ 00:05:47.615 START TEST env_memory 00:05:47.615 ************************************ 00:05:47.615 17:55:16 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.615 00:05:47.615 00:05:47.615 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.615 http://cunit.sourceforge.net/ 00:05:47.615 00:05:47.615 00:05:47.615 Suite: memory 00:05:47.615 Test: alloc and free memory map ...[2024-11-05 17:55:16.809220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.615 passed 00:05:47.615 Test: mem map translation ...[2024-11-05 17:55:16.877185] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:47.615 [2024-11-05 17:55:16.877310] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:47.615 [2024-11-05 17:55:16.877420] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:47.615 [2024-11-05 17:55:16.877457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:47.872 passed 00:05:47.872 Test: mem map registration ...[2024-11-05 17:55:16.956461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:47.872 [2024-11-05 17:55:16.956550] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:47.872 passed 00:05:47.872 Test: mem map adjacent registrations ...passed 00:05:47.872 00:05:47.873 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.873 suites 1 1 n/a 0 0 00:05:47.873 tests 4 4 4 0 0 00:05:47.873 asserts 152 152 152 0 n/a 00:05:47.873 00:05:47.873 Elapsed time = 0.286 seconds 00:05:47.873 00:05:47.873 real 0m0.333s 00:05:47.873 user 0m0.291s 00:05:47.873 sys 0m0.035s 00:05:47.873 17:55:17 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.873 17:55:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:47.873 ************************************ 00:05:47.873 END TEST env_memory 00:05:47.873 ************************************ 00:05:47.873 17:55:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.873 17:55:17 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.873 17:55:17 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.873 17:55:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.873 ************************************ 00:05:47.873 START TEST env_vtophys 00:05:47.873 ************************************ 00:05:47.873 17:55:17 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.873 EAL: lib.eal log level changed from notice to debug 00:05:47.873 EAL: Detected lcore 0 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 1 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 2 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 3 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 4 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 5 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 6 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 7 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 8 as core 0 on socket 0 00:05:47.873 EAL: Detected lcore 9 as core 0 on socket 0 00:05:48.151 EAL: Maximum logical cores by configuration: 128 00:05:48.151 EAL: Detected CPU lcores: 10 00:05:48.151 EAL: Detected NUMA nodes: 1 00:05:48.151 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:48.151 EAL: Detected shared linkage of DPDK 00:05:48.151 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:48.151 EAL: Selected IOVA mode 'PA' 00:05:48.151 EAL: Probing VFIO support... 00:05:48.151 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.151 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:48.151 EAL: Ask a virtual area of 0x2e000 bytes 00:05:48.151 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:48.151 EAL: Setting up physically contiguous memory... 00:05:48.151 EAL: Setting maximum number of open files to 524288 00:05:48.151 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:48.151 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:48.151 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.151 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:48.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.151 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.151 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:48.151 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:48.151 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.151 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:48.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.151 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.151 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:48.151 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:48.151 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.151 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:48.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.151 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.151 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:48.151 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:48.151 EAL: Ask a virtual area of 0x61000 bytes 00:05:48.151 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:48.151 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:48.151 EAL: Ask a virtual area of 0x400000000 bytes 00:05:48.151 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:48.151 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:48.151 EAL: Hugepages will be freed exactly as allocated. 00:05:48.151 EAL: No shared files mode enabled, IPC is disabled 00:05:48.151 EAL: No shared files mode enabled, IPC is disabled 00:05:48.151 EAL: TSC frequency is ~2490000 KHz 00:05:48.151 EAL: Main lcore 0 is ready (tid=7fc840f23a40;cpuset=[0]) 00:05:48.151 EAL: Trying to obtain current memory policy. 00:05:48.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.151 EAL: Restoring previous memory policy: 0 00:05:48.151 EAL: request: mp_malloc_sync 00:05:48.151 EAL: No shared files mode enabled, IPC is disabled 00:05:48.151 EAL: Heap on socket 0 was expanded by 2MB 00:05:48.151 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:48.151 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:48.151 EAL: Mem event callback 'spdk:(nil)' registered 00:05:48.151 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:48.151 00:05:48.151 00:05:48.151 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.151 http://cunit.sourceforge.net/ 00:05:48.151 00:05:48.151 00:05:48.151 Suite: components_suite 00:05:48.718 Test: vtophys_malloc_test ...passed 00:05:48.718 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:48.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.718 EAL: Restoring previous memory policy: 4 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was expanded by 4MB 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was shrunk by 4MB 00:05:48.718 EAL: Trying to obtain current memory policy. 00:05:48.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.718 EAL: Restoring previous memory policy: 4 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was expanded by 6MB 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was shrunk by 6MB 00:05:48.718 EAL: Trying to obtain current memory policy. 00:05:48.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.718 EAL: Restoring previous memory policy: 4 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was expanded by 10MB 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was shrunk by 10MB 00:05:48.718 EAL: Trying to obtain current memory policy. 00:05:48.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.718 EAL: Restoring previous memory policy: 4 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was expanded by 18MB 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was shrunk by 18MB 00:05:48.718 EAL: Trying to obtain current memory policy. 00:05:48.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.718 EAL: Restoring previous memory policy: 4 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was expanded by 34MB 00:05:48.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.718 EAL: request: mp_malloc_sync 00:05:48.718 EAL: No shared files mode enabled, IPC is disabled 00:05:48.718 EAL: Heap on socket 0 was shrunk by 34MB 00:05:48.977 EAL: Trying to obtain current memory policy. 
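
The spdk_mem_map *ERROR* lines at the top of this excerpt are expected negative-test output, not real failures: translations and registrations must cover whole 2 MiB hugepages, so vaddr=0x200000 with len=1234 is rejected, as is any address at or above 2^48 (281474976710656, outside the user virtual address range). A minimal sketch of the API under test follows; it is an illustration, not the test's source, and the callback and translation values are made up.

    #include "spdk/env.h"
    #include <inttypes.h>
    #include <stdio.h>

    #define PAGE_2MB (2ULL * 1024 * 1024)

    /* Invoked for every spdk_mem_register()/spdk_mem_unregister(); a non-zero
     * return during the initial sweep at map-allocation time is what the
     * "Initial mem_map notify failed" error above refers to. */
    static int
    notify_cb(void *cb_ctx, struct spdk_mem_map *map,
              enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = notify_cb };

    int
    main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "mem_map_sketch";            /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
        if (map == NULL) {
            return 1;
        }

        /* Both vaddr and size must be 2 MiB multiples; 0x200000 with
         * len=1234 fails exactly like the log lines above. */
        spdk_mem_map_set_translation(map, PAGE_2MB, PAGE_2MB, 0xabcd0000);

        uint64_t len = PAGE_2MB;
        uint64_t tr = spdk_mem_map_translate(map, PAGE_2MB, &len);
        printf("0x%" PRIx64 " -> 0x%" PRIx64 " (%" PRIu64 " bytes)\n",
               (uint64_t)PAGE_2MB, tr, len);

        spdk_mem_map_free(&map);
        spdk_env_fini();
        return 0;
    }
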
00:05:48.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.977 EAL: Restoring previous memory policy: 4 00:05:48.977 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.977 EAL: request: mp_malloc_sync 00:05:48.977 EAL: No shared files mode enabled, IPC is disabled 00:05:48.977 EAL: Heap on socket 0 was expanded by 66MB 00:05:48.977 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.977 EAL: request: mp_malloc_sync 00:05:48.977 EAL: No shared files mode enabled, IPC is disabled 00:05:48.977 EAL: Heap on socket 0 was shrunk by 66MB 00:05:49.236 EAL: Trying to obtain current memory policy. 00:05:49.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.236 EAL: Restoring previous memory policy: 4 00:05:49.236 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.236 EAL: request: mp_malloc_sync 00:05:49.236 EAL: No shared files mode enabled, IPC is disabled 00:05:49.236 EAL: Heap on socket 0 was expanded by 130MB 00:05:49.495 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.495 EAL: request: mp_malloc_sync 00:05:49.495 EAL: No shared files mode enabled, IPC is disabled 00:05:49.495 EAL: Heap on socket 0 was shrunk by 130MB 00:05:49.495 EAL: Trying to obtain current memory policy. 00:05:49.495 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:49.754 EAL: Restoring previous memory policy: 4 00:05:49.754 EAL: Calling mem event callback 'spdk:(nil)' 00:05:49.754 EAL: request: mp_malloc_sync 00:05:49.754 EAL: No shared files mode enabled, IPC is disabled 00:05:49.754 EAL: Heap on socket 0 was expanded by 258MB 00:05:50.013 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.272 EAL: request: mp_malloc_sync 00:05:50.272 EAL: No shared files mode enabled, IPC is disabled 00:05:50.272 EAL: Heap on socket 0 was shrunk by 258MB 00:05:50.536 EAL: Trying to obtain current memory policy. 00:05:50.536 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:50.799 EAL: Restoring previous memory policy: 4 00:05:50.799 EAL: Calling mem event callback 'spdk:(nil)' 00:05:50.799 EAL: request: mp_malloc_sync 00:05:50.799 EAL: No shared files mode enabled, IPC is disabled 00:05:50.799 EAL: Heap on socket 0 was expanded by 514MB 00:05:51.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:51.736 EAL: request: mp_malloc_sync 00:05:51.736 EAL: No shared files mode enabled, IPC is disabled 00:05:51.736 EAL: Heap on socket 0 was shrunk by 514MB 00:05:52.673 EAL: Trying to obtain current memory policy. 
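
Each expanded-by/shrunk-by pair in this stretch is one round of vtophys_spdk_malloc_test: an allocation of roughly twice the previous size forces DPDK to grow the 2 MiB hugepage heap, the registered 'spdk:(nil)' mem event callback updates the address map, and the matching free lets the heap shrink again. A hedged sketch of a single round, under the assumption that the standard spdk_dma_zmalloc()/spdk_vtophys() pair is the operation being exercised:

    #include "spdk/env.h"
    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch";            /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* 4 MiB, 2 MiB-aligned: may trigger "Heap on socket 0 was expanded". */
        size_t len = 4 * 1024 * 1024;
        void *buf = spdk_dma_zmalloc(len, 2 * 1024 * 1024, NULL);
        if (buf == NULL) {
            return 1;
        }

        uint64_t size = len;
        uint64_t paddr = spdk_vtophys(buf, &size);
        if (paddr == SPDK_VTOPHYS_ERROR) {
            return 1;
        }
        printf("%p -> 0x%" PRIx64 ", %" PRIu64 " bytes physically contiguous\n",
               buf, paddr, size);

        spdk_dma_free(buf);                      /* heap may shrink again */
        spdk_env_fini();
        return 0;
    }
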
00:05:52.673 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.673 EAL: Restoring previous memory policy: 4 00:05:52.673 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.673 EAL: request: mp_malloc_sync 00:05:52.673 EAL: No shared files mode enabled, IPC is disabled 00:05:52.673 EAL: Heap on socket 0 was expanded by 1026MB 00:05:54.579 EAL: Calling mem event callback 'spdk:(nil)' 00:05:54.579 EAL: request: mp_malloc_sync 00:05:54.579 EAL: No shared files mode enabled, IPC is disabled 00:05:54.579 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:56.486 passed 00:05:56.486 00:05:56.486 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.486 suites 1 1 n/a 0 0 00:05:56.486 tests 2 2 2 0 0 00:05:56.486 asserts 5894 5894 5894 0 n/a 00:05:56.486 00:05:56.486 Elapsed time = 8.034 seconds 00:05:56.486 EAL: Calling mem event callback 'spdk:(nil)' 00:05:56.486 EAL: request: mp_malloc_sync 00:05:56.486 EAL: No shared files mode enabled, IPC is disabled 00:05:56.486 EAL: Heap on socket 0 was shrunk by 2MB 00:05:56.486 EAL: No shared files mode enabled, IPC is disabled 00:05:56.486 EAL: No shared files mode enabled, IPC is disabled 00:05:56.486 EAL: No shared files mode enabled, IPC is disabled 00:05:56.486 00:05:56.486 real 0m8.401s 00:05:56.486 user 0m7.352s 00:05:56.486 sys 0m0.883s 00:05:56.486 17:55:25 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.486 17:55:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:56.486 ************************************ 00:05:56.486 END TEST env_vtophys 00:05:56.486 ************************************ 00:05:56.486 17:55:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.486 17:55:25 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.486 17:55:25 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.486 17:55:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.486 ************************************ 00:05:56.486 START TEST env_pci 00:05:56.486 ************************************ 00:05:56.486 17:55:25 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:56.486 00:05:56.486 00:05:56.486 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.486 http://cunit.sourceforge.net/ 00:05:56.486 00:05:56.486 00:05:56.486 Suite: pci 00:05:56.486 Test: pci_hook ...[2024-11-05 17:55:25.648628] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57562 has claimed it 00:05:56.486 passed 00:05:56.486 00:05:56.486 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.486 suites 1 1 n/a 0 0 00:05:56.486 tests 1 1 1 0 0 00:05:56.486 asserts 25 25 25 0 n/a 00:05:56.486 00:05:56.486 Elapsed time = 0.011 seconds 00:05:56.486 EAL: Cannot find device (10000:00:01.0) 00:05:56.486 EAL: Failed to attach device on primary process 00:05:56.486 00:05:56.486 real 0m0.112s 00:05:56.486 user 0m0.045s 00:05:56.486 sys 0m0.066s 00:05:56.486 17:55:25 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.486 17:55:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:56.486 ************************************ 00:05:56.486 END TEST env_pci 00:05:56.486 ************************************ 00:05:56.486 17:55:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:56.486 17:55:25 env -- env/env.sh@15 -- # uname 00:05:56.486 17:55:25 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:56.486 17:55:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:56.486 17:55:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.486 17:55:25 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:56.486 17:55:25 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.486 17:55:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.486 ************************************ 00:05:56.486 START TEST env_dpdk_post_init 00:05:56.486 ************************************ 00:05:56.486 17:55:25 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:56.746 EAL: Detected CPU lcores: 10 00:05:56.746 EAL: Detected NUMA nodes: 1 00:05:56.746 EAL: Detected shared linkage of DPDK 00:05:56.746 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:56.746 EAL: Selected IOVA mode 'PA' 00:05:56.746 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:56.746 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:56.746 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:56.746 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:56.746 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:57.005 Starting DPDK initialization... 00:05:57.005 Starting SPDK post initialization... 00:05:57.005 SPDK NVMe probe 00:05:57.005 Attaching to 0000:00:10.0 00:05:57.005 Attaching to 0000:00:11.0 00:05:57.005 Attaching to 0000:00:12.0 00:05:57.005 Attaching to 0000:00:13.0 00:05:57.005 Attached to 0000:00:10.0 00:05:57.005 Attached to 0000:00:11.0 00:05:57.005 Attached to 0000:00:13.0 00:05:57.005 Attached to 0000:00:12.0 00:05:57.005 Cleaning up... 
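
The Attaching/Attached lines above are NVMe enumeration over the four emulated QEMU controllers (PCI 1b36:0010); attach callbacks complete asynchronously, which would explain 0000:00:13.0 reporting before 0000:00:12.0. A minimal sketch of that flow, assuming the stock spdk_nvme_probe() API; callback names are illustrative and this is not the test binary's source:

    #include "spdk/env.h"
    #include "spdk/nvme.h"
    #include <stdio.h>

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true;                             /* claim every controller */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;
        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";          /* illustrative name */
        opts.core_mask = "0x1";                  /* mirrors -c 0x1 above */
        opts.base_virtaddr = 0x200000000000;     /* mirrors --base-virtaddr */
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* NULL transport ID means: scan the local PCIe bus for NVMe. */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }

        printf("Cleaning up...\n");
        spdk_env_fini();
        return 0;
    }
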
00:05:57.005 00:05:57.005 real 0m0.323s 00:05:57.005 user 0m0.106s 00:05:57.005 sys 0m0.120s 00:05:57.005 17:55:26 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.005 ************************************ 00:05:57.005 END TEST env_dpdk_post_init 00:05:57.005 ************************************ 00:05:57.005 17:55:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:57.005 17:55:26 env -- env/env.sh@26 -- # uname 00:05:57.005 17:55:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:57.005 17:55:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:57.005 17:55:26 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.005 17:55:26 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.005 17:55:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.005 ************************************ 00:05:57.005 START TEST env_mem_callbacks 00:05:57.005 ************************************ 00:05:57.005 17:55:26 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:57.005 EAL: Detected CPU lcores: 10 00:05:57.005 EAL: Detected NUMA nodes: 1 00:05:57.005 EAL: Detected shared linkage of DPDK 00:05:57.005 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:57.005 EAL: Selected IOVA mode 'PA' 00:05:57.265 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:57.265 00:05:57.265 00:05:57.265 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.265 http://cunit.sourceforge.net/ 00:05:57.265 00:05:57.265 00:05:57.265 Suite: memory 00:05:57.265 Test: test ... 00:05:57.265 register 0x200000200000 2097152 00:05:57.265 malloc 3145728 00:05:57.265 register 0x200000400000 4194304 00:05:57.265 buf 0x2000004fffc0 len 3145728 PASSED 00:05:57.265 malloc 64 00:05:57.265 buf 0x2000004ffec0 len 64 PASSED 00:05:57.265 malloc 4194304 00:05:57.265 register 0x200000800000 6291456 00:05:57.265 buf 0x2000009fffc0 len 4194304 PASSED 00:05:57.265 free 0x2000004fffc0 3145728 00:05:57.265 free 0x2000004ffec0 64 00:05:57.265 unregister 0x200000400000 4194304 PASSED 00:05:57.265 free 0x2000009fffc0 4194304 00:05:57.265 unregister 0x200000800000 6291456 PASSED 00:05:57.265 malloc 8388608 00:05:57.265 register 0x200000400000 10485760 00:05:57.265 buf 0x2000005fffc0 len 8388608 PASSED 00:05:57.265 free 0x2000005fffc0 8388608 00:05:57.265 unregister 0x200000400000 10485760 PASSED 00:05:57.265 passed 00:05:57.265 00:05:57.265 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.265 suites 1 1 n/a 0 0 00:05:57.265 tests 1 1 1 0 0 00:05:57.265 asserts 15 15 15 0 n/a 00:05:57.265 00:05:57.265 Elapsed time = 0.084 seconds 00:05:57.265 00:05:57.265 real 0m0.299s 00:05:57.265 user 0m0.113s 00:05:57.265 sys 0m0.084s 00:05:57.265 ************************************ 00:05:57.265 END TEST env_mem_callbacks 00:05:57.265 ************************************ 00:05:57.265 17:55:26 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.265 17:55:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:57.265 00:05:57.265 real 0m10.016s 00:05:57.265 user 0m8.125s 00:05:57.265 sys 0m1.530s 00:05:57.265 17:55:26 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.265 17:55:26 env -- common/autotest_common.sh@10 -- # set +x 00:05:57.265 ************************************ 00:05:57.265 END TEST env 00:05:57.265 
************************************ 00:05:57.265 17:55:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:57.265 17:55:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.265 17:55:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.265 17:55:26 -- common/autotest_common.sh@10 -- # set +x 00:05:57.524 ************************************ 00:05:57.524 START TEST rpc 00:05:57.524 ************************************ 00:05:57.524 17:55:26 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:57.524 * Looking for test storage... 00:05:57.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:57.524 17:55:26 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.524 17:55:26 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.524 17:55:26 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.524 17:55:26 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.524 17:55:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.524 17:55:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.524 17:55:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.524 17:55:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.524 17:55:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.524 17:55:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.524 17:55:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.524 17:55:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.524 17:55:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.524 17:55:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.524 17:55:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.524 17:55:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:57.524 17:55:26 rpc -- scripts/common.sh@345 -- # : 1 00:05:57.525 17:55:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.525 17:55:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.525 17:55:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:57.525 17:55:26 rpc -- scripts/common.sh@353 -- # local d=1 00:05:57.525 17:55:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.525 17:55:26 rpc -- scripts/common.sh@355 -- # echo 1 00:05:57.525 17:55:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.525 17:55:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:57.525 17:55:26 rpc -- scripts/common.sh@353 -- # local d=2 00:05:57.525 17:55:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.525 17:55:26 rpc -- scripts/common.sh@355 -- # echo 2 00:05:57.525 17:55:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.525 17:55:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.525 17:55:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.525 17:55:26 rpc -- scripts/common.sh@368 -- # return 0 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.525 --rc genhtml_branch_coverage=1 00:05:57.525 --rc genhtml_function_coverage=1 00:05:57.525 --rc genhtml_legend=1 00:05:57.525 --rc geninfo_all_blocks=1 00:05:57.525 --rc geninfo_unexecuted_blocks=1 00:05:57.525 00:05:57.525 ' 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.525 --rc genhtml_branch_coverage=1 00:05:57.525 --rc genhtml_function_coverage=1 00:05:57.525 --rc genhtml_legend=1 00:05:57.525 --rc geninfo_all_blocks=1 00:05:57.525 --rc geninfo_unexecuted_blocks=1 00:05:57.525 00:05:57.525 ' 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.525 --rc genhtml_branch_coverage=1 00:05:57.525 --rc genhtml_function_coverage=1 00:05:57.525 --rc genhtml_legend=1 00:05:57.525 --rc geninfo_all_blocks=1 00:05:57.525 --rc geninfo_unexecuted_blocks=1 00:05:57.525 00:05:57.525 ' 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.525 --rc genhtml_branch_coverage=1 00:05:57.525 --rc genhtml_function_coverage=1 00:05:57.525 --rc genhtml_legend=1 00:05:57.525 --rc geninfo_all_blocks=1 00:05:57.525 --rc geninfo_unexecuted_blocks=1 00:05:57.525 00:05:57.525 ' 00:05:57.525 17:55:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57691 00:05:57.525 17:55:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:57.525 17:55:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.525 17:55:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57691 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@833 -- # '[' -z 57691 ']' 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
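
waitforlisten above simply polls until spdk_tgt (pid 57691 here) accepts connections on /var/tmp/spdk.sock; from then on, every rpc_cmd in this suite reduces to one JSON-RPC 2.0 request over that Unix socket, normally sent via scripts/rpc.py. A hedged raw-socket sketch of the same exchange; buffer size and error handling are illustrative:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int
    main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            return 1;            /* not listening yet; waitforlisten retries */
        }

        /* Equivalent of "rpc_cmd bdev_malloc_create 8 512": 8 MiB in
         * 512-byte blocks is the num_blocks=16384 seen later in the log. */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_malloc_create\","
            "\"params\":{\"num_blocks\":16384,\"block_size\":512}}";
        if (write(fd, req, strlen(req)) < 0) {
            return 1;
        }

        char resp[4096];
        ssize_t n = read(fd, resp, sizeof(resp) - 1);
        if (n > 0) {
            resp[n] = '\0';
            printf("%s\n", resp);    /* e.g. {"jsonrpc":"2.0","id":1,"result":"Malloc0"} */
        }
        close(fd);
        return 0;
    }
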
00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.525 17:55:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.784 [2024-11-05 17:55:26.931360] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:05:57.784 [2024-11-05 17:55:26.931886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57691 ] 00:05:58.043 [2024-11-05 17:55:27.115329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.043 [2024-11-05 17:55:27.228542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:58.043 [2024-11-05 17:55:27.228605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57691' to capture a snapshot of events at runtime. 00:05:58.043 [2024-11-05 17:55:27.228619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:58.043 [2024-11-05 17:55:27.228633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:58.043 [2024-11-05 17:55:27.228643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57691 for offline analysis/debug. 00:05:58.043 [2024-11-05 17:55:27.229907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.983 17:55:28 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.983 17:55:28 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:58.983 17:55:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.983 17:55:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:58.983 17:55:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:58.983 17:55:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:58.983 17:55:28 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.983 17:55:28 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.983 17:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.983 ************************************ 00:05:58.983 START TEST rpc_integrity 00:05:58.983 ************************************ 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.983 17:55:28 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:58.983 { 00:05:58.983 "name": "Malloc0", 00:05:58.983 "aliases": [ 00:05:58.983 "692ef8d9-ab47-44d7-84b4-2e01963144ee" 00:05:58.983 ], 00:05:58.983 "product_name": "Malloc disk", 00:05:58.983 "block_size": 512, 00:05:58.983 "num_blocks": 16384, 00:05:58.983 "uuid": "692ef8d9-ab47-44d7-84b4-2e01963144ee", 00:05:58.983 "assigned_rate_limits": { 00:05:58.983 "rw_ios_per_sec": 0, 00:05:58.983 "rw_mbytes_per_sec": 0, 00:05:58.983 "r_mbytes_per_sec": 0, 00:05:58.983 "w_mbytes_per_sec": 0 00:05:58.983 }, 00:05:58.983 "claimed": false, 00:05:58.983 "zoned": false, 00:05:58.983 "supported_io_types": { 00:05:58.983 "read": true, 00:05:58.983 "write": true, 00:05:58.983 "unmap": true, 00:05:58.983 "flush": true, 00:05:58.983 "reset": true, 00:05:58.983 "nvme_admin": false, 00:05:58.983 "nvme_io": false, 00:05:58.983 "nvme_io_md": false, 00:05:58.983 "write_zeroes": true, 00:05:58.983 "zcopy": true, 00:05:58.983 "get_zone_info": false, 00:05:58.983 "zone_management": false, 00:05:58.983 "zone_append": false, 00:05:58.983 "compare": false, 00:05:58.983 "compare_and_write": false, 00:05:58.983 "abort": true, 00:05:58.983 "seek_hole": false, 00:05:58.983 "seek_data": false, 00:05:58.983 "copy": true, 00:05:58.983 "nvme_iov_md": false 00:05:58.983 }, 00:05:58.983 "memory_domains": [ 00:05:58.983 { 00:05:58.983 "dma_device_id": "system", 00:05:58.983 "dma_device_type": 1 00:05:58.983 }, 00:05:58.983 { 00:05:58.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:58.983 "dma_device_type": 2 00:05:58.983 } 00:05:58.983 ], 00:05:58.983 "driver_specific": {} 00:05:58.983 } 00:05:58.983 ]' 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:58.983 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.983 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:58.983 [2024-11-05 17:55:28.285175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:58.983 [2024-11-05 17:55:28.285238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:58.983 [2024-11-05 17:55:28.285270] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:58.983 [2024-11-05 17:55:28.285285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:58.983 [2024-11-05 17:55:28.287673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:58.983 [2024-11-05 17:55:28.287721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:58.984 Passthru0 00:05:58.984 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.984 
17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:58.984 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.984 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.243 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.243 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:59.243 { 00:05:59.243 "name": "Malloc0", 00:05:59.243 "aliases": [ 00:05:59.243 "692ef8d9-ab47-44d7-84b4-2e01963144ee" 00:05:59.243 ], 00:05:59.243 "product_name": "Malloc disk", 00:05:59.243 "block_size": 512, 00:05:59.243 "num_blocks": 16384, 00:05:59.243 "uuid": "692ef8d9-ab47-44d7-84b4-2e01963144ee", 00:05:59.243 "assigned_rate_limits": { 00:05:59.243 "rw_ios_per_sec": 0, 00:05:59.243 "rw_mbytes_per_sec": 0, 00:05:59.243 "r_mbytes_per_sec": 0, 00:05:59.243 "w_mbytes_per_sec": 0 00:05:59.243 }, 00:05:59.243 "claimed": true, 00:05:59.243 "claim_type": "exclusive_write", 00:05:59.243 "zoned": false, 00:05:59.243 "supported_io_types": { 00:05:59.243 "read": true, 00:05:59.243 "write": true, 00:05:59.243 "unmap": true, 00:05:59.243 "flush": true, 00:05:59.243 "reset": true, 00:05:59.243 "nvme_admin": false, 00:05:59.243 "nvme_io": false, 00:05:59.243 "nvme_io_md": false, 00:05:59.243 "write_zeroes": true, 00:05:59.243 "zcopy": true, 00:05:59.243 "get_zone_info": false, 00:05:59.243 "zone_management": false, 00:05:59.243 "zone_append": false, 00:05:59.243 "compare": false, 00:05:59.243 "compare_and_write": false, 00:05:59.243 "abort": true, 00:05:59.243 "seek_hole": false, 00:05:59.243 "seek_data": false, 00:05:59.243 "copy": true, 00:05:59.243 "nvme_iov_md": false 00:05:59.243 }, 00:05:59.243 "memory_domains": [ 00:05:59.243 { 00:05:59.243 "dma_device_id": "system", 00:05:59.243 "dma_device_type": 1 00:05:59.243 }, 00:05:59.243 { 00:05:59.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.243 "dma_device_type": 2 00:05:59.244 } 00:05:59.244 ], 00:05:59.244 "driver_specific": {} 00:05:59.244 }, 00:05:59.244 { 00:05:59.244 "name": "Passthru0", 00:05:59.244 "aliases": [ 00:05:59.244 "50621eed-8f47-5228-aa34-af1bc8db8e02" 00:05:59.244 ], 00:05:59.244 "product_name": "passthru", 00:05:59.244 "block_size": 512, 00:05:59.244 "num_blocks": 16384, 00:05:59.244 "uuid": "50621eed-8f47-5228-aa34-af1bc8db8e02", 00:05:59.244 "assigned_rate_limits": { 00:05:59.244 "rw_ios_per_sec": 0, 00:05:59.244 "rw_mbytes_per_sec": 0, 00:05:59.244 "r_mbytes_per_sec": 0, 00:05:59.244 "w_mbytes_per_sec": 0 00:05:59.244 }, 00:05:59.244 "claimed": false, 00:05:59.244 "zoned": false, 00:05:59.244 "supported_io_types": { 00:05:59.244 "read": true, 00:05:59.244 "write": true, 00:05:59.244 "unmap": true, 00:05:59.244 "flush": true, 00:05:59.244 "reset": true, 00:05:59.244 "nvme_admin": false, 00:05:59.244 "nvme_io": false, 00:05:59.244 "nvme_io_md": false, 00:05:59.244 "write_zeroes": true, 00:05:59.244 "zcopy": true, 00:05:59.244 "get_zone_info": false, 00:05:59.244 "zone_management": false, 00:05:59.244 "zone_append": false, 00:05:59.244 "compare": false, 00:05:59.244 "compare_and_write": false, 00:05:59.244 "abort": true, 00:05:59.244 "seek_hole": false, 00:05:59.244 "seek_data": false, 00:05:59.244 "copy": true, 00:05:59.244 "nvme_iov_md": false 00:05:59.244 }, 00:05:59.244 "memory_domains": [ 00:05:59.244 { 00:05:59.244 "dma_device_id": "system", 00:05:59.244 "dma_device_type": 1 00:05:59.244 }, 00:05:59.244 { 00:05:59.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.244 "dma_device_type": 2 
00:05:59.244 } 00:05:59.244 ], 00:05:59.244 "driver_specific": { 00:05:59.244 "passthru": { 00:05:59.244 "name": "Passthru0", 00:05:59.244 "base_bdev_name": "Malloc0" 00:05:59.244 } 00:05:59.244 } 00:05:59.244 } 00:05:59.244 ]' 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:59.244 17:55:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:59.244 00:05:59.244 real 0m0.331s 00:05:59.244 user 0m0.164s 00:05:59.244 sys 0m0.065s 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.244 17:55:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:59.244 ************************************ 00:05:59.244 END TEST rpc_integrity 00:05:59.244 ************************************ 00:05:59.244 17:55:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:59.244 17:55:28 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.244 17:55:28 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.244 17:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.244 ************************************ 00:05:59.244 START TEST rpc_plugins 00:05:59.244 ************************************ 00:05:59.244 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:59.244 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:59.244 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.244 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:59.244 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.244 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:59.244 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:59.244 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.244 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:59.504 { 00:05:59.504 "name": "Malloc1", 00:05:59.504 "aliases": 
[ 00:05:59.504 "55457ea2-ce1d-4d2e-80f0-9dfb0ff3927a" 00:05:59.504 ], 00:05:59.504 "product_name": "Malloc disk", 00:05:59.504 "block_size": 4096, 00:05:59.504 "num_blocks": 256, 00:05:59.504 "uuid": "55457ea2-ce1d-4d2e-80f0-9dfb0ff3927a", 00:05:59.504 "assigned_rate_limits": { 00:05:59.504 "rw_ios_per_sec": 0, 00:05:59.504 "rw_mbytes_per_sec": 0, 00:05:59.504 "r_mbytes_per_sec": 0, 00:05:59.504 "w_mbytes_per_sec": 0 00:05:59.504 }, 00:05:59.504 "claimed": false, 00:05:59.504 "zoned": false, 00:05:59.504 "supported_io_types": { 00:05:59.504 "read": true, 00:05:59.504 "write": true, 00:05:59.504 "unmap": true, 00:05:59.504 "flush": true, 00:05:59.504 "reset": true, 00:05:59.504 "nvme_admin": false, 00:05:59.504 "nvme_io": false, 00:05:59.504 "nvme_io_md": false, 00:05:59.504 "write_zeroes": true, 00:05:59.504 "zcopy": true, 00:05:59.504 "get_zone_info": false, 00:05:59.504 "zone_management": false, 00:05:59.504 "zone_append": false, 00:05:59.504 "compare": false, 00:05:59.504 "compare_and_write": false, 00:05:59.504 "abort": true, 00:05:59.504 "seek_hole": false, 00:05:59.504 "seek_data": false, 00:05:59.504 "copy": true, 00:05:59.504 "nvme_iov_md": false 00:05:59.504 }, 00:05:59.504 "memory_domains": [ 00:05:59.504 { 00:05:59.504 "dma_device_id": "system", 00:05:59.504 "dma_device_type": 1 00:05:59.504 }, 00:05:59.504 { 00:05:59.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:59.504 "dma_device_type": 2 00:05:59.504 } 00:05:59.504 ], 00:05:59.504 "driver_specific": {} 00:05:59.504 } 00:05:59.504 ]' 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:59.504 17:55:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:59.504 00:05:59.504 real 0m0.165s 00:05:59.504 user 0m0.093s 00:05:59.504 sys 0m0.033s 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.504 17:55:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:59.504 ************************************ 00:05:59.504 END TEST rpc_plugins 00:05:59.504 ************************************ 00:05:59.504 17:55:28 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:59.504 17:55:28 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.504 17:55:28 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.504 17:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.504 ************************************ 00:05:59.504 START TEST rpc_trace_cmd_test 00:05:59.504 ************************************ 00:05:59.504 17:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:05:59.504 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:59.504 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:59.504 17:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.504 17:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.504 17:55:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:59.504 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:59.504 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57691", 00:05:59.504 "tpoint_group_mask": "0x8", 00:05:59.504 "iscsi_conn": { 00:05:59.504 "mask": "0x2", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "scsi": { 00:05:59.504 "mask": "0x4", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "bdev": { 00:05:59.504 "mask": "0x8", 00:05:59.504 "tpoint_mask": "0xffffffffffffffff" 00:05:59.504 }, 00:05:59.504 "nvmf_rdma": { 00:05:59.504 "mask": "0x10", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "nvmf_tcp": { 00:05:59.504 "mask": "0x20", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "ftl": { 00:05:59.504 "mask": "0x40", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "blobfs": { 00:05:59.504 "mask": "0x80", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "dsa": { 00:05:59.504 "mask": "0x200", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "thread": { 00:05:59.504 "mask": "0x400", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "nvme_pcie": { 00:05:59.504 "mask": "0x800", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "iaa": { 00:05:59.504 "mask": "0x1000", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "nvme_tcp": { 00:05:59.504 "mask": "0x2000", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "bdev_nvme": { 00:05:59.504 "mask": "0x4000", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "sock": { 00:05:59.504 "mask": "0x8000", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.504 }, 00:05:59.504 "blob": { 00:05:59.504 "mask": "0x10000", 00:05:59.504 "tpoint_mask": "0x0" 00:05:59.505 }, 00:05:59.505 "bdev_raid": { 00:05:59.505 "mask": "0x20000", 00:05:59.505 "tpoint_mask": "0x0" 00:05:59.505 }, 00:05:59.505 "scheduler": { 00:05:59.505 "mask": "0x40000", 00:05:59.505 "tpoint_mask": "0x0" 00:05:59.505 } 00:05:59.505 }' 00:05:59.505 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:59.764 17:55:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:59.764 17:55:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:59.764 00:05:59.764 real 0m0.248s 00:05:59.764 user 0m0.197s 00:05:59.764 sys 0m0.043s 00:05:59.764 17:55:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:05:59.764 17:55:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:59.764 ************************************ 00:05:59.764 END TEST rpc_trace_cmd_test 00:05:59.764 ************************************ 00:05:59.764 17:55:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:59.764 17:55:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:59.764 17:55:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:59.764 17:55:29 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.764 17:55:29 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.764 17:55:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.764 ************************************ 00:05:59.764 START TEST rpc_daemon_integrity 00:05:59.764 ************************************ 00:05:59.764 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:59.764 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:59.764 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:59.764 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:00.024 { 00:06:00.024 "name": "Malloc2", 00:06:00.024 "aliases": [ 00:06:00.024 "1ea715e1-1751-4e1e-b8c1-81e52bc5cca1" 00:06:00.024 ], 00:06:00.024 "product_name": "Malloc disk", 00:06:00.024 "block_size": 512, 00:06:00.024 "num_blocks": 16384, 00:06:00.024 "uuid": "1ea715e1-1751-4e1e-b8c1-81e52bc5cca1", 00:06:00.024 "assigned_rate_limits": { 00:06:00.024 "rw_ios_per_sec": 0, 00:06:00.024 "rw_mbytes_per_sec": 0, 00:06:00.024 "r_mbytes_per_sec": 0, 00:06:00.024 "w_mbytes_per_sec": 0 00:06:00.024 }, 00:06:00.024 "claimed": false, 00:06:00.024 "zoned": false, 00:06:00.024 "supported_io_types": { 00:06:00.024 "read": true, 00:06:00.024 "write": true, 00:06:00.024 "unmap": true, 00:06:00.024 "flush": true, 00:06:00.024 "reset": true, 00:06:00.024 "nvme_admin": false, 00:06:00.024 "nvme_io": false, 00:06:00.024 "nvme_io_md": false, 00:06:00.024 "write_zeroes": true, 00:06:00.024 "zcopy": true, 00:06:00.024 "get_zone_info": false, 00:06:00.024 "zone_management": false, 00:06:00.024 "zone_append": false, 00:06:00.024 "compare": false, 00:06:00.024 
"compare_and_write": false, 00:06:00.024 "abort": true, 00:06:00.024 "seek_hole": false, 00:06:00.024 "seek_data": false, 00:06:00.024 "copy": true, 00:06:00.024 "nvme_iov_md": false 00:06:00.024 }, 00:06:00.024 "memory_domains": [ 00:06:00.024 { 00:06:00.024 "dma_device_id": "system", 00:06:00.024 "dma_device_type": 1 00:06:00.024 }, 00:06:00.024 { 00:06:00.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.024 "dma_device_type": 2 00:06:00.024 } 00:06:00.024 ], 00:06:00.024 "driver_specific": {} 00:06:00.024 } 00:06:00.024 ]' 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.024 [2024-11-05 17:55:29.235543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:00.024 [2024-11-05 17:55:29.235605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.024 [2024-11-05 17:55:29.235626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:00.024 [2024-11-05 17:55:29.235640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.024 [2024-11-05 17:55:29.238081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:00.024 [2024-11-05 17:55:29.238128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:00.024 Passthru0 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:00.024 { 00:06:00.024 "name": "Malloc2", 00:06:00.024 "aliases": [ 00:06:00.024 "1ea715e1-1751-4e1e-b8c1-81e52bc5cca1" 00:06:00.024 ], 00:06:00.024 "product_name": "Malloc disk", 00:06:00.024 "block_size": 512, 00:06:00.024 "num_blocks": 16384, 00:06:00.024 "uuid": "1ea715e1-1751-4e1e-b8c1-81e52bc5cca1", 00:06:00.024 "assigned_rate_limits": { 00:06:00.024 "rw_ios_per_sec": 0, 00:06:00.024 "rw_mbytes_per_sec": 0, 00:06:00.024 "r_mbytes_per_sec": 0, 00:06:00.024 "w_mbytes_per_sec": 0 00:06:00.024 }, 00:06:00.024 "claimed": true, 00:06:00.024 "claim_type": "exclusive_write", 00:06:00.024 "zoned": false, 00:06:00.024 "supported_io_types": { 00:06:00.024 "read": true, 00:06:00.024 "write": true, 00:06:00.024 "unmap": true, 00:06:00.024 "flush": true, 00:06:00.024 "reset": true, 00:06:00.024 "nvme_admin": false, 00:06:00.024 "nvme_io": false, 00:06:00.024 "nvme_io_md": false, 00:06:00.024 "write_zeroes": true, 00:06:00.024 "zcopy": true, 00:06:00.024 "get_zone_info": false, 00:06:00.024 "zone_management": false, 00:06:00.024 "zone_append": false, 00:06:00.024 "compare": false, 00:06:00.024 "compare_and_write": false, 00:06:00.024 "abort": true, 00:06:00.024 "seek_hole": false, 00:06:00.024 "seek_data": false, 
00:06:00.024 "copy": true, 00:06:00.024 "nvme_iov_md": false 00:06:00.024 }, 00:06:00.024 "memory_domains": [ 00:06:00.024 { 00:06:00.024 "dma_device_id": "system", 00:06:00.024 "dma_device_type": 1 00:06:00.024 }, 00:06:00.024 { 00:06:00.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.024 "dma_device_type": 2 00:06:00.024 } 00:06:00.024 ], 00:06:00.024 "driver_specific": {} 00:06:00.024 }, 00:06:00.024 { 00:06:00.024 "name": "Passthru0", 00:06:00.024 "aliases": [ 00:06:00.024 "b0f4b640-fbf9-5095-9de3-aa8b793d6b17" 00:06:00.024 ], 00:06:00.024 "product_name": "passthru", 00:06:00.024 "block_size": 512, 00:06:00.024 "num_blocks": 16384, 00:06:00.024 "uuid": "b0f4b640-fbf9-5095-9de3-aa8b793d6b17", 00:06:00.024 "assigned_rate_limits": { 00:06:00.024 "rw_ios_per_sec": 0, 00:06:00.024 "rw_mbytes_per_sec": 0, 00:06:00.024 "r_mbytes_per_sec": 0, 00:06:00.024 "w_mbytes_per_sec": 0 00:06:00.024 }, 00:06:00.024 "claimed": false, 00:06:00.024 "zoned": false, 00:06:00.024 "supported_io_types": { 00:06:00.024 "read": true, 00:06:00.024 "write": true, 00:06:00.024 "unmap": true, 00:06:00.024 "flush": true, 00:06:00.024 "reset": true, 00:06:00.024 "nvme_admin": false, 00:06:00.024 "nvme_io": false, 00:06:00.024 "nvme_io_md": false, 00:06:00.024 "write_zeroes": true, 00:06:00.024 "zcopy": true, 00:06:00.024 "get_zone_info": false, 00:06:00.024 "zone_management": false, 00:06:00.024 "zone_append": false, 00:06:00.024 "compare": false, 00:06:00.024 "compare_and_write": false, 00:06:00.024 "abort": true, 00:06:00.024 "seek_hole": false, 00:06:00.024 "seek_data": false, 00:06:00.024 "copy": true, 00:06:00.024 "nvme_iov_md": false 00:06:00.024 }, 00:06:00.024 "memory_domains": [ 00:06:00.024 { 00:06:00.024 "dma_device_id": "system", 00:06:00.024 "dma_device_type": 1 00:06:00.024 }, 00:06:00.024 { 00:06:00.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.024 "dma_device_type": 2 00:06:00.024 } 00:06:00.024 ], 00:06:00.024 "driver_specific": { 00:06:00.024 "passthru": { 00:06:00.024 "name": "Passthru0", 00:06:00.024 "base_bdev_name": "Malloc2" 00:06:00.024 } 00:06:00.024 } 00:06:00.024 } 00:06:00.024 ]' 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.024 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:00.284 00:06:00.284 real 0m0.347s 00:06:00.284 user 0m0.191s 00:06:00.284 sys 0m0.066s 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.284 ************************************ 00:06:00.284 17:55:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.284 END TEST rpc_daemon_integrity 00:06:00.284 ************************************ 00:06:00.284 17:55:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:00.284 17:55:29 rpc -- rpc/rpc.sh@84 -- # killprocess 57691 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@952 -- # '[' -z 57691 ']' 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@956 -- # kill -0 57691 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@957 -- # uname 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57691 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:00.284 killing process with pid 57691 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57691' 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@971 -- # kill 57691 00:06:00.284 17:55:29 rpc -- common/autotest_common.sh@976 -- # wait 57691 00:06:02.820 00:06:02.820 real 0m5.275s 00:06:02.820 user 0m5.718s 00:06:02.820 sys 0m1.020s 00:06:02.820 17:55:31 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:02.820 17:55:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.820 ************************************ 00:06:02.820 END TEST rpc 00:06:02.820 ************************************ 00:06:02.820 17:55:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:02.820 17:55:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:02.820 17:55:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:02.820 17:55:31 -- common/autotest_common.sh@10 -- # set +x 00:06:02.820 ************************************ 00:06:02.820 START TEST skip_rpc 00:06:02.820 ************************************ 00:06:02.820 17:55:31 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:02.820 * Looking for test storage... 
00:06:02.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.820 17:55:32 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:02.820 17:55:32 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:02.820 17:55:32 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:02.820 17:55:32 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.820 17:55:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.080 17:55:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:03.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.080 --rc genhtml_branch_coverage=1 00:06:03.080 --rc genhtml_function_coverage=1 00:06:03.080 --rc genhtml_legend=1 00:06:03.080 --rc geninfo_all_blocks=1 00:06:03.080 --rc geninfo_unexecuted_blocks=1 00:06:03.080 00:06:03.080 ' 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:03.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.080 --rc genhtml_branch_coverage=1 00:06:03.080 --rc genhtml_function_coverage=1 00:06:03.080 --rc genhtml_legend=1 00:06:03.080 --rc geninfo_all_blocks=1 00:06:03.080 --rc geninfo_unexecuted_blocks=1 00:06:03.080 00:06:03.080 ' 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:06:03.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.080 --rc genhtml_branch_coverage=1 00:06:03.080 --rc genhtml_function_coverage=1 00:06:03.080 --rc genhtml_legend=1 00:06:03.080 --rc geninfo_all_blocks=1 00:06:03.080 --rc geninfo_unexecuted_blocks=1 00:06:03.080 00:06:03.080 ' 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:03.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.080 --rc genhtml_branch_coverage=1 00:06:03.080 --rc genhtml_function_coverage=1 00:06:03.080 --rc genhtml_legend=1 00:06:03.080 --rc geninfo_all_blocks=1 00:06:03.080 --rc geninfo_unexecuted_blocks=1 00:06:03.080 00:06:03.080 ' 00:06:03.080 17:55:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.080 17:55:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.080 17:55:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.080 17:55:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.080 ************************************ 00:06:03.080 START TEST skip_rpc 00:06:03.080 ************************************ 00:06:03.080 17:55:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:03.080 17:55:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57923 00:06:03.080 17:55:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:03.080 17:55:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.080 17:55:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:03.080 [2024-11-05 17:55:32.290208] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:06:03.080 [2024-11-05 17:55:32.290334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57923 ] 00:06:03.340 [2024-11-05 17:55:32.471997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.340 [2024-11-05 17:55:32.577390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57923 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57923 ']' 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57923 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57923 00:06:08.613 killing process with pid 57923 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57923' 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57923 00:06:08.613 17:55:37 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57923 00:06:10.519 00:06:10.519 real 0m7.396s 00:06:10.519 user 0m6.876s 00:06:10.519 sys 0m0.438s 00:06:10.519 17:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.519 ************************************ 00:06:10.519 17:55:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.519 END TEST skip_rpc 00:06:10.519 
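END TEST skip_rpc above closes the simplest scenario: spdk_tgt runs with --no-rpc-server, so the NOT wrapper around rpc_cmd spdk_get_version only passes because the RPC call fails (the es=1 in the trace). Stripped of the xtrace plumbing, the assertion amounts to roughly this sketch, with the paths from the log and sleep 5 standing in for the test's fixed startup wait:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                            # matches rpc/skip_rpc.sh@19 above
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC answered although no RPC server was started" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"; wait "$pid"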
************************************ 00:06:10.519 17:55:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:10.519 17:55:39 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.519 17:55:39 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.519 17:55:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.519 ************************************ 00:06:10.519 START TEST skip_rpc_with_json 00:06:10.519 ************************************ 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58033 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58033 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58033 ']' 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:10.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:10.519 17:55:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:10.519 [2024-11-05 17:55:39.759198] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:06:10.519 [2024-11-05 17:55:39.759344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58033 ] 00:06:10.778 [2024-11-05 17:55:39.940811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.778 [2024-11-05 17:55:40.051149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.716 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:11.716 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:11.716 17:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:11.716 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.717 [2024-11-05 17:55:40.938644] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:11.717 request: 00:06:11.717 { 00:06:11.717 "trtype": "tcp", 00:06:11.717 "method": "nvmf_get_transports", 00:06:11.717 "req_id": 1 00:06:11.717 } 00:06:11.717 Got JSON-RPC error response 00:06:11.717 response: 00:06:11.717 { 00:06:11.717 "code": -19, 00:06:11.717 "message": "No such device" 00:06:11.717 } 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.717 [2024-11-05 17:55:40.950743] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.717 17:55:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:11.976 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.976 17:55:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:11.976 { 00:06:11.976 "subsystems": [ 00:06:11.976 { 00:06:11.976 "subsystem": "fsdev", 00:06:11.976 "config": [ 00:06:11.976 { 00:06:11.976 "method": "fsdev_set_opts", 00:06:11.977 "params": { 00:06:11.977 "fsdev_io_pool_size": 65535, 00:06:11.977 "fsdev_io_cache_size": 256 00:06:11.977 } 00:06:11.977 } 00:06:11.977 ] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "keyring", 00:06:11.977 "config": [] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "iobuf", 00:06:11.977 "config": [ 00:06:11.977 { 00:06:11.977 "method": "iobuf_set_options", 00:06:11.977 "params": { 00:06:11.977 "small_pool_count": 8192, 00:06:11.977 "large_pool_count": 1024, 00:06:11.977 "small_bufsize": 8192, 00:06:11.977 "large_bufsize": 135168, 00:06:11.977 "enable_numa": false 00:06:11.977 } 00:06:11.977 } 00:06:11.977 ] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "sock", 00:06:11.977 "config": [ 00:06:11.977 { 
00:06:11.977 "method": "sock_set_default_impl", 00:06:11.977 "params": { 00:06:11.977 "impl_name": "posix" 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "sock_impl_set_options", 00:06:11.977 "params": { 00:06:11.977 "impl_name": "ssl", 00:06:11.977 "recv_buf_size": 4096, 00:06:11.977 "send_buf_size": 4096, 00:06:11.977 "enable_recv_pipe": true, 00:06:11.977 "enable_quickack": false, 00:06:11.977 "enable_placement_id": 0, 00:06:11.977 "enable_zerocopy_send_server": true, 00:06:11.977 "enable_zerocopy_send_client": false, 00:06:11.977 "zerocopy_threshold": 0, 00:06:11.977 "tls_version": 0, 00:06:11.977 "enable_ktls": false 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "sock_impl_set_options", 00:06:11.977 "params": { 00:06:11.977 "impl_name": "posix", 00:06:11.977 "recv_buf_size": 2097152, 00:06:11.977 "send_buf_size": 2097152, 00:06:11.977 "enable_recv_pipe": true, 00:06:11.977 "enable_quickack": false, 00:06:11.977 "enable_placement_id": 0, 00:06:11.977 "enable_zerocopy_send_server": true, 00:06:11.977 "enable_zerocopy_send_client": false, 00:06:11.977 "zerocopy_threshold": 0, 00:06:11.977 "tls_version": 0, 00:06:11.977 "enable_ktls": false 00:06:11.977 } 00:06:11.977 } 00:06:11.977 ] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "vmd", 00:06:11.977 "config": [] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "accel", 00:06:11.977 "config": [ 00:06:11.977 { 00:06:11.977 "method": "accel_set_options", 00:06:11.977 "params": { 00:06:11.977 "small_cache_size": 128, 00:06:11.977 "large_cache_size": 16, 00:06:11.977 "task_count": 2048, 00:06:11.977 "sequence_count": 2048, 00:06:11.977 "buf_count": 2048 00:06:11.977 } 00:06:11.977 } 00:06:11.977 ] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "bdev", 00:06:11.977 "config": [ 00:06:11.977 { 00:06:11.977 "method": "bdev_set_options", 00:06:11.977 "params": { 00:06:11.977 "bdev_io_pool_size": 65535, 00:06:11.977 "bdev_io_cache_size": 256, 00:06:11.977 "bdev_auto_examine": true, 00:06:11.977 "iobuf_small_cache_size": 128, 00:06:11.977 "iobuf_large_cache_size": 16 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "bdev_raid_set_options", 00:06:11.977 "params": { 00:06:11.977 "process_window_size_kb": 1024, 00:06:11.977 "process_max_bandwidth_mb_sec": 0 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "bdev_iscsi_set_options", 00:06:11.977 "params": { 00:06:11.977 "timeout_sec": 30 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "bdev_nvme_set_options", 00:06:11.977 "params": { 00:06:11.977 "action_on_timeout": "none", 00:06:11.977 "timeout_us": 0, 00:06:11.977 "timeout_admin_us": 0, 00:06:11.977 "keep_alive_timeout_ms": 10000, 00:06:11.977 "arbitration_burst": 0, 00:06:11.977 "low_priority_weight": 0, 00:06:11.977 "medium_priority_weight": 0, 00:06:11.977 "high_priority_weight": 0, 00:06:11.977 "nvme_adminq_poll_period_us": 10000, 00:06:11.977 "nvme_ioq_poll_period_us": 0, 00:06:11.977 "io_queue_requests": 0, 00:06:11.977 "delay_cmd_submit": true, 00:06:11.977 "transport_retry_count": 4, 00:06:11.977 "bdev_retry_count": 3, 00:06:11.977 "transport_ack_timeout": 0, 00:06:11.977 "ctrlr_loss_timeout_sec": 0, 00:06:11.977 "reconnect_delay_sec": 0, 00:06:11.977 "fast_io_fail_timeout_sec": 0, 00:06:11.977 "disable_auto_failback": false, 00:06:11.977 "generate_uuids": false, 00:06:11.977 "transport_tos": 0, 00:06:11.977 "nvme_error_stat": false, 00:06:11.977 "rdma_srq_size": 0, 00:06:11.977 "io_path_stat": false, 
00:06:11.977 "allow_accel_sequence": false, 00:06:11.977 "rdma_max_cq_size": 0, 00:06:11.977 "rdma_cm_event_timeout_ms": 0, 00:06:11.977 "dhchap_digests": [ 00:06:11.977 "sha256", 00:06:11.977 "sha384", 00:06:11.977 "sha512" 00:06:11.977 ], 00:06:11.977 "dhchap_dhgroups": [ 00:06:11.977 "null", 00:06:11.977 "ffdhe2048", 00:06:11.977 "ffdhe3072", 00:06:11.977 "ffdhe4096", 00:06:11.977 "ffdhe6144", 00:06:11.977 "ffdhe8192" 00:06:11.977 ] 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "bdev_nvme_set_hotplug", 00:06:11.977 "params": { 00:06:11.977 "period_us": 100000, 00:06:11.977 "enable": false 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "bdev_wait_for_examine" 00:06:11.977 } 00:06:11.977 ] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "scsi", 00:06:11.977 "config": null 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "scheduler", 00:06:11.977 "config": [ 00:06:11.977 { 00:06:11.977 "method": "framework_set_scheduler", 00:06:11.977 "params": { 00:06:11.977 "name": "static" 00:06:11.977 } 00:06:11.977 } 00:06:11.977 ] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "vhost_scsi", 00:06:11.977 "config": [] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "vhost_blk", 00:06:11.977 "config": [] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "ublk", 00:06:11.977 "config": [] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "nbd", 00:06:11.977 "config": [] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "nvmf", 00:06:11.977 "config": [ 00:06:11.977 { 00:06:11.977 "method": "nvmf_set_config", 00:06:11.977 "params": { 00:06:11.977 "discovery_filter": "match_any", 00:06:11.977 "admin_cmd_passthru": { 00:06:11.977 "identify_ctrlr": false 00:06:11.977 }, 00:06:11.977 "dhchap_digests": [ 00:06:11.977 "sha256", 00:06:11.977 "sha384", 00:06:11.977 "sha512" 00:06:11.977 ], 00:06:11.977 "dhchap_dhgroups": [ 00:06:11.977 "null", 00:06:11.977 "ffdhe2048", 00:06:11.977 "ffdhe3072", 00:06:11.977 "ffdhe4096", 00:06:11.977 "ffdhe6144", 00:06:11.977 "ffdhe8192" 00:06:11.977 ] 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "nvmf_set_max_subsystems", 00:06:11.977 "params": { 00:06:11.977 "max_subsystems": 1024 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "nvmf_set_crdt", 00:06:11.977 "params": { 00:06:11.977 "crdt1": 0, 00:06:11.977 "crdt2": 0, 00:06:11.977 "crdt3": 0 00:06:11.977 } 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "method": "nvmf_create_transport", 00:06:11.977 "params": { 00:06:11.977 "trtype": "TCP", 00:06:11.977 "max_queue_depth": 128, 00:06:11.977 "max_io_qpairs_per_ctrlr": 127, 00:06:11.977 "in_capsule_data_size": 4096, 00:06:11.977 "max_io_size": 131072, 00:06:11.977 "io_unit_size": 131072, 00:06:11.977 "max_aq_depth": 128, 00:06:11.977 "num_shared_buffers": 511, 00:06:11.977 "buf_cache_size": 4294967295, 00:06:11.977 "dif_insert_or_strip": false, 00:06:11.977 "zcopy": false, 00:06:11.977 "c2h_success": true, 00:06:11.977 "sock_priority": 0, 00:06:11.977 "abort_timeout_sec": 1, 00:06:11.977 "ack_timeout": 0, 00:06:11.977 "data_wr_pool_size": 0 00:06:11.977 } 00:06:11.977 } 00:06:11.977 ] 00:06:11.977 }, 00:06:11.977 { 00:06:11.977 "subsystem": "iscsi", 00:06:11.977 "config": [ 00:06:11.977 { 00:06:11.977 "method": "iscsi_set_options", 00:06:11.977 "params": { 00:06:11.977 "node_base": "iqn.2016-06.io.spdk", 00:06:11.977 "max_sessions": 128, 00:06:11.977 "max_connections_per_session": 2, 00:06:11.977 "max_queue_depth": 64, 00:06:11.977 
"default_time2wait": 2, 00:06:11.977 "default_time2retain": 20, 00:06:11.977 "first_burst_length": 8192, 00:06:11.977 "immediate_data": true, 00:06:11.977 "allow_duplicated_isid": false, 00:06:11.977 "error_recovery_level": 0, 00:06:11.977 "nop_timeout": 60, 00:06:11.977 "nop_in_interval": 30, 00:06:11.977 "disable_chap": false, 00:06:11.977 "require_chap": false, 00:06:11.977 "mutual_chap": false, 00:06:11.977 "chap_group": 0, 00:06:11.977 "max_large_datain_per_connection": 64, 00:06:11.977 "max_r2t_per_connection": 4, 00:06:11.977 "pdu_pool_size": 36864, 00:06:11.977 "immediate_data_pool_size": 16384, 00:06:11.977 "data_out_pool_size": 2048 00:06:11.978 } 00:06:11.978 } 00:06:11.978 ] 00:06:11.978 } 00:06:11.978 ] 00:06:11.978 } 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58033 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58033 ']' 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58033 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58033 00:06:11.978 killing process with pid 58033 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58033' 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58033 00:06:11.978 17:55:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58033 00:06:14.548 17:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58078 00:06:14.548 17:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:14.548 17:55:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58078 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58078 ']' 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58078 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58078 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:19.823 killing process with pid 58078 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58078' 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58078 00:06:19.823 17:55:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58078 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:21.725 00:06:21.725 real 0m11.277s 00:06:21.725 user 0m10.655s 00:06:21.725 sys 0m0.960s 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.725 ************************************ 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:21.725 END TEST skip_rpc_with_json 00:06:21.725 ************************************ 00:06:21.725 17:55:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:21.725 17:55:50 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.725 17:55:50 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.725 17:55:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.725 ************************************ 00:06:21.725 START TEST skip_rpc_with_delay 00:06:21.725 ************************************ 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.725 17:55:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.725 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.725 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.725 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.725 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:21.725 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:21.984 [2024-11-05 17:55:51.111816] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
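Two assertions finish here. skip_rpc_with_json proves the saved JSON alone can rebuild target state: the first instance created a TCP nvmf transport before save_config, and the restarted --no-rpc-server instance must print 'TCP Transport Init' purely from --json (the grep at rpc/skip_rpc.sh@51). skip_rpc_with_delay then checks the inverse flag logic: the *ERROR* just above is the pass condition, because --wait-for-rpc blocks startup on an RPC that --no-rpc-server makes impossible. A rough sketch of the round trip, with the sleeps standing in for the real waitforlisten/log polling:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

    $tgt -m 0x1 & pid=$!
    sleep 2
    $rpc nvmf_create_transport -t tcp                  # state that save_config must capture
    $rpc save_config > "$cfg"
    kill "$pid"; wait "$pid"

    $tgt --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 & pid=$!
    sleep 2
    grep -q 'TCP Transport Init' "$log"                # transport rebuilt from JSON alone
    kill "$pid"; wait "$pid"

    $tgt --no-rpc-server -m 0x1 --wait-for-rpc && exit 1   # must refuse to start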
00:06:21.984 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:21.984 ************************************ 00:06:21.984 END TEST skip_rpc_with_delay 00:06:21.984 ************************************ 00:06:21.984 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.984 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.984 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.984 00:06:21.984 real 0m0.175s 00:06:21.984 user 0m0.089s 00:06:21.984 sys 0m0.084s 00:06:21.984 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.984 17:55:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:21.984 17:55:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:21.984 17:55:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:21.984 17:55:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:21.984 17:55:51 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.984 17:55:51 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.984 17:55:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.984 ************************************ 00:06:21.984 START TEST exit_on_failed_rpc_init 00:06:21.984 ************************************ 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58217 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58217 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58217 ']' 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.984 17:55:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:22.243 [2024-11-05 17:55:51.354081] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:06:22.243 [2024-11-05 17:55:51.354385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58217 ] 00:06:22.243 [2024-11-05 17:55:51.534690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.502 [2024-11-05 17:55:51.638120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:23.440 17:55:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:23.440 [2024-11-05 17:55:52.510560] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:23.440 [2024-11-05 17:55:52.510859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58235 ] 00:06:23.440 [2024-11-05 17:55:52.690529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.699 [2024-11-05 17:55:52.801897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.699 [2024-11-05 17:55:52.802214] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
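The two *ERROR* records just above are the crux of exit_on_failed_rpc_init: the first spdk_tgt (pid 58217, mask 0x1) holds /var/tmp/spdk.sock, so the second instance on mask 0x2 cannot start its RPC listener, stops via spdk_app_stop, and hands the harness the non-zero status (es=234 below) it expects. A minimal reproduction of the socket collision, under the same paths:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    $tgt -m 0x1 & first=$!
    sleep 2                                            # the real test polls the socket instead
    if $tgt -m 0x2; then                               # same default /var/tmp/spdk.sock: must fail
        echo "FAIL: second instance bound the socket" >&2
        kill "$first"; exit 1
    fi
    kill "$first"; wait "$first"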
00:06:23.699 [2024-11-05 17:55:52.802242] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:23.699 [2024-11-05 17:55:52.802266] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58217 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58217 ']' 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58217 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58217 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58217' 00:06:23.959 killing process with pid 58217 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58217 00:06:23.959 17:55:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58217 00:06:26.495 00:06:26.495 real 0m4.139s 00:06:26.495 user 0m4.396s 00:06:26.495 sys 0m0.600s 00:06:26.495 17:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.495 17:55:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:26.495 ************************************ 00:06:26.495 END TEST exit_on_failed_rpc_init 00:06:26.495 ************************************ 00:06:26.495 17:55:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:26.495 00:06:26.495 real 0m23.514s 00:06:26.495 user 0m22.240s 00:06:26.495 sys 0m2.388s 00:06:26.495 17:55:55 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.495 ************************************ 00:06:26.495 END TEST skip_rpc 00:06:26.495 ************************************ 00:06:26.495 17:55:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.495 17:55:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:26.495 17:55:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.495 17:55:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.495 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.495 
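Every suite in this log is driven by the same run_test wrapper, which emits the START/END banners and the real/user/sys blocks seen throughout (0m23.514s for the whole skip_rpc group above). A simplified sketch of that wrapper; the real one, in SPDK's test/common/autotest_common.sh, additionally handles xtrace toggling and failure bookkeeping:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                      # the timing block in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }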
************************************ 00:06:26.495 START TEST rpc_client 00:06:26.495 ************************************ 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:26.495 * Looking for test storage... 00:06:26.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.495 17:55:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.495 --rc genhtml_branch_coverage=1 00:06:26.495 --rc genhtml_function_coverage=1 00:06:26.495 --rc genhtml_legend=1 00:06:26.495 --rc geninfo_all_blocks=1 00:06:26.495 --rc geninfo_unexecuted_blocks=1 00:06:26.495 00:06:26.495 ' 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.495 --rc genhtml_branch_coverage=1 00:06:26.495 --rc genhtml_function_coverage=1 00:06:26.495 --rc genhtml_legend=1 00:06:26.495 --rc geninfo_all_blocks=1 00:06:26.495 --rc geninfo_unexecuted_blocks=1 00:06:26.495 00:06:26.495 ' 00:06:26.495 17:55:55 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.495 --rc genhtml_branch_coverage=1 00:06:26.495 --rc genhtml_function_coverage=1 00:06:26.495 --rc genhtml_legend=1 00:06:26.495 --rc geninfo_all_blocks=1 00:06:26.495 --rc geninfo_unexecuted_blocks=1 00:06:26.495 00:06:26.495 ' 00:06:26.496 17:55:55 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.496 --rc genhtml_branch_coverage=1 00:06:26.496 --rc genhtml_function_coverage=1 00:06:26.496 --rc genhtml_legend=1 00:06:26.496 --rc geninfo_all_blocks=1 00:06:26.496 --rc geninfo_unexecuted_blocks=1 00:06:26.496 00:06:26.496 ' 00:06:26.496 17:55:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:26.496 OK 00:06:26.755 17:55:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:26.755 00:06:26.755 real 0m0.310s 00:06:26.755 user 0m0.167s 00:06:26.755 sys 0m0.159s 00:06:26.755 17:55:55 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.755 17:55:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:26.755 ************************************ 00:06:26.755 END TEST rpc_client 00:06:26.755 ************************************ 00:06:26.755 17:55:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:26.755 17:55:55 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.755 17:55:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.755 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:06:26.755 ************************************ 00:06:26.755 START TEST json_config 00:06:26.755 ************************************ 00:06:26.755 17:55:55 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:26.755 17:55:56 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.755 17:55:56 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.755 17:55:56 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.015 17:55:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.015 17:55:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.015 17:55:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.015 17:55:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.015 17:55:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.015 17:55:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.015 17:55:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.015 17:55:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:27.015 17:55:56 json_config -- scripts/common.sh@345 -- # : 1 00:06:27.015 17:55:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.015 17:55:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.015 17:55:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:27.015 17:55:56 json_config -- scripts/common.sh@353 -- # local d=1 00:06:27.015 17:55:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.015 17:55:56 json_config -- scripts/common.sh@355 -- # echo 1 00:06:27.015 17:55:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.015 17:55:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@353 -- # local d=2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.015 17:55:56 json_config -- scripts/common.sh@355 -- # echo 2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.015 17:55:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.015 17:55:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.015 17:55:56 json_config -- scripts/common.sh@368 -- # return 0 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.015 --rc genhtml_branch_coverage=1 00:06:27.015 --rc genhtml_function_coverage=1 00:06:27.015 --rc genhtml_legend=1 00:06:27.015 --rc geninfo_all_blocks=1 00:06:27.015 --rc geninfo_unexecuted_blocks=1 00:06:27.015 00:06:27.015 ' 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.015 --rc genhtml_branch_coverage=1 00:06:27.015 --rc genhtml_function_coverage=1 00:06:27.015 --rc genhtml_legend=1 00:06:27.015 --rc geninfo_all_blocks=1 00:06:27.015 --rc geninfo_unexecuted_blocks=1 00:06:27.015 00:06:27.015 ' 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.015 --rc genhtml_branch_coverage=1 00:06:27.015 --rc genhtml_function_coverage=1 00:06:27.015 --rc genhtml_legend=1 00:06:27.015 --rc geninfo_all_blocks=1 00:06:27.015 --rc geninfo_unexecuted_blocks=1 00:06:27.015 00:06:27.015 ' 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.015 --rc genhtml_branch_coverage=1 00:06:27.015 --rc genhtml_function_coverage=1 00:06:27.015 --rc genhtml_legend=1 00:06:27.015 --rc geninfo_all_blocks=1 00:06:27.015 --rc geninfo_unexecuted_blocks=1 00:06:27.015 00:06:27.015 ' 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.015 
17:55:56 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9149b43b-a128-4f4b-a4f1-526b0f9933e8 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=9149b43b-a128-4f4b-a4f1-526b0f9933e8 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.015 17:55:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.015 17:55:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.015 17:55:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.015 17:55:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.015 17:55:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.015 17:55:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.015 17:55:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.015 17:55:56 json_config -- paths/export.sh@5 -- # export PATH 00:06:27.015 17:55:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:27.015 17:55:56 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:27.015 17:55:56 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:27.015 17:55:56 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@50 -- # : 0 00:06:27.015 
17:55:56 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:27.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:27.015 17:55:56 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:27.015 WARNING: No tests are enabled so not running JSON configuration tests 00:06:27.015 17:55:56 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:27.015 ************************************ 00:06:27.015 END TEST json_config 00:06:27.015 ************************************ 00:06:27.015 00:06:27.015 real 0m0.232s 00:06:27.015 user 0m0.135s 00:06:27.015 sys 0m0.095s 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.015 17:55:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.015 17:55:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:27.015 17:55:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.015 17:55:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.015 17:55:56 -- common/autotest_common.sh@10 -- # set +x 00:06:27.015 ************************************ 00:06:27.015 START TEST json_config_extra_key 00:06:27.015 ************************************ 00:06:27.015 17:55:56 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:27.015 17:55:56 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.015 17:55:56 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.015 17:55:56 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.275 17:55:56 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.275 
17:55:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:27.275 17:55:56 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.275 17:55:56 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.275 --rc genhtml_branch_coverage=1 00:06:27.275 --rc genhtml_function_coverage=1 00:06:27.275 --rc genhtml_legend=1 00:06:27.275 --rc geninfo_all_blocks=1 00:06:27.275 --rc geninfo_unexecuted_blocks=1 00:06:27.275 00:06:27.275 ' 00:06:27.275 17:55:56 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.275 --rc genhtml_branch_coverage=1 00:06:27.275 --rc genhtml_function_coverage=1 00:06:27.275 --rc genhtml_legend=1 00:06:27.275 --rc geninfo_all_blocks=1 00:06:27.275 --rc geninfo_unexecuted_blocks=1 00:06:27.275 00:06:27.275 ' 00:06:27.275 17:55:56 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.275 --rc genhtml_branch_coverage=1 00:06:27.275 --rc genhtml_function_coverage=1 00:06:27.275 --rc genhtml_legend=1 00:06:27.275 --rc geninfo_all_blocks=1 00:06:27.275 --rc geninfo_unexecuted_blocks=1 00:06:27.275 00:06:27.275 ' 00:06:27.275 17:55:56 json_config_extra_key -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.275 --rc genhtml_branch_coverage=1 00:06:27.275 --rc genhtml_function_coverage=1 00:06:27.275 --rc genhtml_legend=1 00:06:27.275 --rc geninfo_all_blocks=1 00:06:27.275 --rc geninfo_unexecuted_blocks=1 00:06:27.275 00:06:27.275 ' 00:06:27.275 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9149b43b-a128-4f4b-a4f1-526b0f9933e8 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=9149b43b-a128-4f4b-a4f1-526b0f9933e8 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.275 17:55:56 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.275 17:55:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.275 17:55:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.276 17:55:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.276 17:55:56 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.276 17:55:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:27.276 17:55:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:27.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:27.276 17:55:56 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A 
configs_path 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:27.276 INFO: launching applications... 00:06:27.276 17:55:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58445 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.276 Waiting for target to run... 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58445 /var/tmp/spdk_tgt.sock 00:06:27.276 17:55:56 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58445 ']' 00:06:27.276 17:55:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:27.276 17:55:56 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.276 17:55:56 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:27.276 17:55:56 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:27.276 17:55:56 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:27.276 17:55:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.276 [2024-11-05 17:55:56.559884] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
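json_config_test_start_app, traced above, reduces to launching spdk_tgt with the extra-key JSON config and polling its UNIX-domain RPC socket until it answers. A minimal stand-in using the binary, socket, and config paths shown in this log; the polling loop is a hypothetical replacement for common.sh's waitforlisten:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_sock=/var/tmp/spdk_tgt.sock
config=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

"$spdk_tgt" -m 0x1 -s 1024 -r "$rpc_sock" --json "$config" &
app_pid=$!

# Poll the socket: rpc.py exits 0 once the target is serving RPCs.
for (( i = 0; i < 100; i++ )); do
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
    echo "target up with pid $app_pid"
    break
  fi
  sleep 0.1
done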
00:06:27.276 [2024-11-05 17:55:56.560199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58445 ] 00:06:27.844 [2024-11-05 17:55:56.952456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.844 [2024-11-05 17:55:57.040838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.423 00:06:28.423 17:55:57 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:28.423 17:55:57 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:28.423 17:55:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:28.423 INFO: shutting down applications... 00:06:28.423 17:55:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58445 ]] 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58445 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58445 00:06:28.423 17:55:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.039 17:55:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.039 17:55:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.039 17:55:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58445 00:06:29.039 17:55:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.607 17:55:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.607 17:55:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.607 17:55:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58445 00:06:29.607 17:55:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.174 17:55:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.174 17:55:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.174 17:55:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58445 00:06:30.174 17:55:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:30.437 17:55:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:30.437 17:55:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:30.437 17:55:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58445 00:06:30.437 17:55:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:31.005 17:56:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:31.005 17:56:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.005 17:56:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58445 
00:06:31.005 17:56:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:31.572 17:56:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:31.572 17:56:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:31.572 17:56:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58445 00:06:31.572 SPDK target shutdown done 00:06:31.572 Success 00:06:31.572 17:56:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:31.572 17:56:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:31.572 17:56:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:31.572 17:56:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:31.572 17:56:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:31.572 00:06:31.572 real 0m4.511s 00:06:31.572 user 0m3.813s 00:06:31.572 sys 0m0.593s 00:06:31.572 17:56:00 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.572 17:56:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:31.572 ************************************ 00:06:31.572 END TEST json_config_extra_key 00:06:31.572 ************************************ 00:06:31.572 17:56:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:31.573 17:56:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:31.573 17:56:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.573 17:56:00 -- common/autotest_common.sh@10 -- # set +x 00:06:31.573 ************************************ 00:06:31.573 START TEST alias_rpc 00:06:31.573 ************************************ 00:06:31.573 17:56:00 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:31.832 * Looking for test storage... 
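Before the alias_rpc output continues: the json_config_extra_key shutdown traced above is a SIGINT followed by a bounded liveness poll, and each "sleep 0.5" entry is one pass of a loop shaped roughly like this (helper name illustrative):

shutdown_app() {
  local pid=$1
  kill -SIGINT "$pid"
  for (( i = 0; i < 30; i++ )); do            # up to ~15 s, 0.5 s per pass
    if ! kill -0 "$pid" 2> /dev/null; then    # kill -0 only probes liveness
      echo 'SPDK target shutdown done'
      return 0
    fi
    sleep 0.5
  done
  echo "process $pid still alive after SIGINT" >&2
  return 1
}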
00:06:31.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:31.832 17:56:00 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:31.832 17:56:00 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:31.832 17:56:00 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.832 17:56:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.832 --rc genhtml_branch_coverage=1 00:06:31.832 --rc genhtml_function_coverage=1 00:06:31.832 --rc genhtml_legend=1 00:06:31.832 --rc geninfo_all_blocks=1 00:06:31.832 --rc geninfo_unexecuted_blocks=1 00:06:31.832 00:06:31.832 ' 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.832 --rc genhtml_branch_coverage=1 00:06:31.832 --rc genhtml_function_coverage=1 00:06:31.832 --rc genhtml_legend=1 00:06:31.832 --rc geninfo_all_blocks=1 00:06:31.832 --rc geninfo_unexecuted_blocks=1 00:06:31.832 00:06:31.832 ' 00:06:31.832 17:56:01 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.832 --rc genhtml_branch_coverage=1 00:06:31.832 --rc genhtml_function_coverage=1 00:06:31.832 --rc genhtml_legend=1 00:06:31.832 --rc geninfo_all_blocks=1 00:06:31.832 --rc geninfo_unexecuted_blocks=1 00:06:31.832 00:06:31.832 ' 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:31.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.832 --rc genhtml_branch_coverage=1 00:06:31.832 --rc genhtml_function_coverage=1 00:06:31.832 --rc genhtml_legend=1 00:06:31.832 --rc geninfo_all_blocks=1 00:06:31.832 --rc geninfo_unexecuted_blocks=1 00:06:31.832 00:06:31.832 ' 00:06:31.832 17:56:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.832 17:56:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58551 00:06:31.832 17:56:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:31.832 17:56:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58551 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58551 ']' 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.832 17:56:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.832 [2024-11-05 17:56:01.144371] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:06:31.832 [2024-11-05 17:56:01.144728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58551 ] 00:06:32.091 [2024-11-05 17:56:01.327144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.349 [2024-11-05 17:56:01.429180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.916 17:56:02 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.916 17:56:02 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:32.916 17:56:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:33.176 17:56:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58551 00:06:33.176 17:56:02 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58551 ']' 00:06:33.176 17:56:02 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58551 00:06:33.176 17:56:02 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:33.176 17:56:02 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.176 17:56:02 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58551 00:06:33.434 killing process with pid 58551 00:06:33.434 17:56:02 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:33.434 17:56:02 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:33.434 17:56:02 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58551' 00:06:33.434 17:56:02 alias_rpc -- common/autotest_common.sh@971 -- # kill 58551 00:06:33.434 17:56:02 alias_rpc -- common/autotest_common.sh@976 -- # wait 58551 00:06:35.973 ************************************ 00:06:35.973 END TEST alias_rpc 00:06:35.973 ************************************ 00:06:35.973 00:06:35.973 real 0m3.996s 00:06:35.973 user 0m3.919s 00:06:35.973 sys 0m0.624s 00:06:35.973 17:56:04 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.973 17:56:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.973 17:56:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:35.973 17:56:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:35.973 17:56:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:35.973 17:56:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.973 17:56:04 -- common/autotest_common.sh@10 -- # set +x 00:06:35.973 ************************************ 00:06:35.973 START TEST spdkcli_tcp 00:06:35.973 ************************************ 00:06:35.973 17:56:04 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:35.973 * Looking for test storage... 
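The alias_rpc teardown just traced (kill -0, ps --no-headers -o comm=, kill, wait) is the usual killprocess pattern from common/autotest_common.sh. A simplified sketch, omitting the sudo special case the real helper also checks:

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1                  # is it still running?
  local name
  name=$(ps --no-headers -o comm= "$pid")     # spdk_tgt shows up as reactor_0
  echo "killing process with pid $pid ($name)"
  kill "$pid"
  wait "$pid"                                 # reap the child, pick up its status
}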
00:06:35.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.973 17:56:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.973 --rc genhtml_branch_coverage=1 00:06:35.973 --rc genhtml_function_coverage=1 00:06:35.973 --rc genhtml_legend=1 00:06:35.973 --rc geninfo_all_blocks=1 00:06:35.973 --rc geninfo_unexecuted_blocks=1 00:06:35.973 00:06:35.973 ' 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.973 --rc genhtml_branch_coverage=1 00:06:35.973 --rc genhtml_function_coverage=1 00:06:35.973 --rc genhtml_legend=1 00:06:35.973 --rc geninfo_all_blocks=1 00:06:35.973 --rc geninfo_unexecuted_blocks=1 00:06:35.973 
00:06:35.973 ' 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.973 --rc genhtml_branch_coverage=1 00:06:35.973 --rc genhtml_function_coverage=1 00:06:35.973 --rc genhtml_legend=1 00:06:35.973 --rc geninfo_all_blocks=1 00:06:35.973 --rc geninfo_unexecuted_blocks=1 00:06:35.973 00:06:35.973 ' 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.973 --rc genhtml_branch_coverage=1 00:06:35.973 --rc genhtml_function_coverage=1 00:06:35.973 --rc genhtml_legend=1 00:06:35.973 --rc geninfo_all_blocks=1 00:06:35.973 --rc geninfo_unexecuted_blocks=1 00:06:35.973 00:06:35.973 ' 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58658 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58658 00:06:35.973 17:56:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58658 ']' 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.973 17:56:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.973 [2024-11-05 17:56:05.235949] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
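The tcp.sh entries that follow bridge the target's UNIX-domain RPC socket to TCP with socat and then drive it through rpc.py's TCP transport (-s address, -p port, -r connect retries, -t per-call timeout). Reduced to its essentials, per the commands in this log:

# Expose /var/tmp/spdk.sock on 127.0.0.1:9998 for the duration of the test.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Same RPC as over the UNIX socket, but routed via TCP through the bridge.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2> /dev/null || true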
00:06:35.973 [2024-11-05 17:56:05.236277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58658 ] 00:06:36.232 [2024-11-05 17:56:05.417785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.232 [2024-11-05 17:56:05.518938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.232 [2024-11-05 17:56:05.518972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.169 17:56:06 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:37.169 17:56:06 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:37.169 17:56:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:37.169 17:56:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58679 00:06:37.169 17:56:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:37.429 [ 00:06:37.429 "bdev_malloc_delete", 00:06:37.429 "bdev_malloc_create", 00:06:37.429 "bdev_null_resize", 00:06:37.429 "bdev_null_delete", 00:06:37.429 "bdev_null_create", 00:06:37.429 "bdev_nvme_cuse_unregister", 00:06:37.429 "bdev_nvme_cuse_register", 00:06:37.429 "bdev_opal_new_user", 00:06:37.429 "bdev_opal_set_lock_state", 00:06:37.429 "bdev_opal_delete", 00:06:37.429 "bdev_opal_get_info", 00:06:37.429 "bdev_opal_create", 00:06:37.429 "bdev_nvme_opal_revert", 00:06:37.429 "bdev_nvme_opal_init", 00:06:37.429 "bdev_nvme_send_cmd", 00:06:37.429 "bdev_nvme_set_keys", 00:06:37.429 "bdev_nvme_get_path_iostat", 00:06:37.429 "bdev_nvme_get_mdns_discovery_info", 00:06:37.429 "bdev_nvme_stop_mdns_discovery", 00:06:37.429 "bdev_nvme_start_mdns_discovery", 00:06:37.429 "bdev_nvme_set_multipath_policy", 00:06:37.429 "bdev_nvme_set_preferred_path", 00:06:37.429 "bdev_nvme_get_io_paths", 00:06:37.429 "bdev_nvme_remove_error_injection", 00:06:37.429 "bdev_nvme_add_error_injection", 00:06:37.429 "bdev_nvme_get_discovery_info", 00:06:37.429 "bdev_nvme_stop_discovery", 00:06:37.429 "bdev_nvme_start_discovery", 00:06:37.429 "bdev_nvme_get_controller_health_info", 00:06:37.429 "bdev_nvme_disable_controller", 00:06:37.429 "bdev_nvme_enable_controller", 00:06:37.429 "bdev_nvme_reset_controller", 00:06:37.429 "bdev_nvme_get_transport_statistics", 00:06:37.429 "bdev_nvme_apply_firmware", 00:06:37.429 "bdev_nvme_detach_controller", 00:06:37.429 "bdev_nvme_get_controllers", 00:06:37.429 "bdev_nvme_attach_controller", 00:06:37.429 "bdev_nvme_set_hotplug", 00:06:37.429 "bdev_nvme_set_options", 00:06:37.429 "bdev_passthru_delete", 00:06:37.429 "bdev_passthru_create", 00:06:37.429 "bdev_lvol_set_parent_bdev", 00:06:37.429 "bdev_lvol_set_parent", 00:06:37.429 "bdev_lvol_check_shallow_copy", 00:06:37.429 "bdev_lvol_start_shallow_copy", 00:06:37.429 "bdev_lvol_grow_lvstore", 00:06:37.429 "bdev_lvol_get_lvols", 00:06:37.429 "bdev_lvol_get_lvstores", 00:06:37.429 "bdev_lvol_delete", 00:06:37.429 "bdev_lvol_set_read_only", 00:06:37.429 "bdev_lvol_resize", 00:06:37.429 "bdev_lvol_decouple_parent", 00:06:37.429 "bdev_lvol_inflate", 00:06:37.429 "bdev_lvol_rename", 00:06:37.429 "bdev_lvol_clone_bdev", 00:06:37.429 "bdev_lvol_clone", 00:06:37.429 "bdev_lvol_snapshot", 00:06:37.429 "bdev_lvol_create", 00:06:37.429 "bdev_lvol_delete_lvstore", 00:06:37.429 "bdev_lvol_rename_lvstore", 00:06:37.429 
"bdev_lvol_create_lvstore", 00:06:37.429 "bdev_raid_set_options", 00:06:37.429 "bdev_raid_remove_base_bdev", 00:06:37.429 "bdev_raid_add_base_bdev", 00:06:37.429 "bdev_raid_delete", 00:06:37.429 "bdev_raid_create", 00:06:37.429 "bdev_raid_get_bdevs", 00:06:37.429 "bdev_error_inject_error", 00:06:37.429 "bdev_error_delete", 00:06:37.429 "bdev_error_create", 00:06:37.429 "bdev_split_delete", 00:06:37.429 "bdev_split_create", 00:06:37.429 "bdev_delay_delete", 00:06:37.429 "bdev_delay_create", 00:06:37.429 "bdev_delay_update_latency", 00:06:37.429 "bdev_zone_block_delete", 00:06:37.429 "bdev_zone_block_create", 00:06:37.429 "blobfs_create", 00:06:37.429 "blobfs_detect", 00:06:37.429 "blobfs_set_cache_size", 00:06:37.429 "bdev_xnvme_delete", 00:06:37.429 "bdev_xnvme_create", 00:06:37.429 "bdev_aio_delete", 00:06:37.429 "bdev_aio_rescan", 00:06:37.429 "bdev_aio_create", 00:06:37.429 "bdev_ftl_set_property", 00:06:37.429 "bdev_ftl_get_properties", 00:06:37.429 "bdev_ftl_get_stats", 00:06:37.429 "bdev_ftl_unmap", 00:06:37.429 "bdev_ftl_unload", 00:06:37.429 "bdev_ftl_delete", 00:06:37.429 "bdev_ftl_load", 00:06:37.429 "bdev_ftl_create", 00:06:37.429 "bdev_virtio_attach_controller", 00:06:37.429 "bdev_virtio_scsi_get_devices", 00:06:37.429 "bdev_virtio_detach_controller", 00:06:37.429 "bdev_virtio_blk_set_hotplug", 00:06:37.429 "bdev_iscsi_delete", 00:06:37.429 "bdev_iscsi_create", 00:06:37.429 "bdev_iscsi_set_options", 00:06:37.429 "accel_error_inject_error", 00:06:37.429 "ioat_scan_accel_module", 00:06:37.429 "dsa_scan_accel_module", 00:06:37.429 "iaa_scan_accel_module", 00:06:37.429 "keyring_file_remove_key", 00:06:37.429 "keyring_file_add_key", 00:06:37.429 "keyring_linux_set_options", 00:06:37.429 "fsdev_aio_delete", 00:06:37.429 "fsdev_aio_create", 00:06:37.429 "iscsi_get_histogram", 00:06:37.429 "iscsi_enable_histogram", 00:06:37.429 "iscsi_set_options", 00:06:37.429 "iscsi_get_auth_groups", 00:06:37.429 "iscsi_auth_group_remove_secret", 00:06:37.429 "iscsi_auth_group_add_secret", 00:06:37.429 "iscsi_delete_auth_group", 00:06:37.429 "iscsi_create_auth_group", 00:06:37.429 "iscsi_set_discovery_auth", 00:06:37.429 "iscsi_get_options", 00:06:37.429 "iscsi_target_node_request_logout", 00:06:37.429 "iscsi_target_node_set_redirect", 00:06:37.429 "iscsi_target_node_set_auth", 00:06:37.429 "iscsi_target_node_add_lun", 00:06:37.429 "iscsi_get_stats", 00:06:37.429 "iscsi_get_connections", 00:06:37.429 "iscsi_portal_group_set_auth", 00:06:37.429 "iscsi_start_portal_group", 00:06:37.429 "iscsi_delete_portal_group", 00:06:37.429 "iscsi_create_portal_group", 00:06:37.429 "iscsi_get_portal_groups", 00:06:37.429 "iscsi_delete_target_node", 00:06:37.429 "iscsi_target_node_remove_pg_ig_maps", 00:06:37.430 "iscsi_target_node_add_pg_ig_maps", 00:06:37.430 "iscsi_create_target_node", 00:06:37.430 "iscsi_get_target_nodes", 00:06:37.430 "iscsi_delete_initiator_group", 00:06:37.430 "iscsi_initiator_group_remove_initiators", 00:06:37.430 "iscsi_initiator_group_add_initiators", 00:06:37.430 "iscsi_create_initiator_group", 00:06:37.430 "iscsi_get_initiator_groups", 00:06:37.430 "nvmf_set_crdt", 00:06:37.430 "nvmf_set_config", 00:06:37.430 "nvmf_set_max_subsystems", 00:06:37.430 "nvmf_stop_mdns_prr", 00:06:37.430 "nvmf_publish_mdns_prr", 00:06:37.430 "nvmf_subsystem_get_listeners", 00:06:37.430 "nvmf_subsystem_get_qpairs", 00:06:37.430 "nvmf_subsystem_get_controllers", 00:06:37.430 "nvmf_get_stats", 00:06:37.430 "nvmf_get_transports", 00:06:37.430 "nvmf_create_transport", 00:06:37.430 "nvmf_get_targets", 00:06:37.430 
"nvmf_delete_target", 00:06:37.430 "nvmf_create_target", 00:06:37.430 "nvmf_subsystem_allow_any_host", 00:06:37.430 "nvmf_subsystem_set_keys", 00:06:37.430 "nvmf_subsystem_remove_host", 00:06:37.430 "nvmf_subsystem_add_host", 00:06:37.430 "nvmf_ns_remove_host", 00:06:37.430 "nvmf_ns_add_host", 00:06:37.430 "nvmf_subsystem_remove_ns", 00:06:37.430 "nvmf_subsystem_set_ns_ana_group", 00:06:37.430 "nvmf_subsystem_add_ns", 00:06:37.430 "nvmf_subsystem_listener_set_ana_state", 00:06:37.430 "nvmf_discovery_get_referrals", 00:06:37.430 "nvmf_discovery_remove_referral", 00:06:37.430 "nvmf_discovery_add_referral", 00:06:37.430 "nvmf_subsystem_remove_listener", 00:06:37.430 "nvmf_subsystem_add_listener", 00:06:37.430 "nvmf_delete_subsystem", 00:06:37.430 "nvmf_create_subsystem", 00:06:37.430 "nvmf_get_subsystems", 00:06:37.430 "env_dpdk_get_mem_stats", 00:06:37.430 "nbd_get_disks", 00:06:37.430 "nbd_stop_disk", 00:06:37.430 "nbd_start_disk", 00:06:37.430 "ublk_recover_disk", 00:06:37.430 "ublk_get_disks", 00:06:37.430 "ublk_stop_disk", 00:06:37.430 "ublk_start_disk", 00:06:37.430 "ublk_destroy_target", 00:06:37.430 "ublk_create_target", 00:06:37.430 "virtio_blk_create_transport", 00:06:37.430 "virtio_blk_get_transports", 00:06:37.430 "vhost_controller_set_coalescing", 00:06:37.430 "vhost_get_controllers", 00:06:37.430 "vhost_delete_controller", 00:06:37.430 "vhost_create_blk_controller", 00:06:37.430 "vhost_scsi_controller_remove_target", 00:06:37.430 "vhost_scsi_controller_add_target", 00:06:37.430 "vhost_start_scsi_controller", 00:06:37.430 "vhost_create_scsi_controller", 00:06:37.430 "thread_set_cpumask", 00:06:37.430 "scheduler_set_options", 00:06:37.430 "framework_get_governor", 00:06:37.430 "framework_get_scheduler", 00:06:37.430 "framework_set_scheduler", 00:06:37.430 "framework_get_reactors", 00:06:37.430 "thread_get_io_channels", 00:06:37.430 "thread_get_pollers", 00:06:37.430 "thread_get_stats", 00:06:37.430 "framework_monitor_context_switch", 00:06:37.430 "spdk_kill_instance", 00:06:37.430 "log_enable_timestamps", 00:06:37.430 "log_get_flags", 00:06:37.430 "log_clear_flag", 00:06:37.430 "log_set_flag", 00:06:37.430 "log_get_level", 00:06:37.430 "log_set_level", 00:06:37.430 "log_get_print_level", 00:06:37.430 "log_set_print_level", 00:06:37.430 "framework_enable_cpumask_locks", 00:06:37.430 "framework_disable_cpumask_locks", 00:06:37.430 "framework_wait_init", 00:06:37.430 "framework_start_init", 00:06:37.430 "scsi_get_devices", 00:06:37.430 "bdev_get_histogram", 00:06:37.430 "bdev_enable_histogram", 00:06:37.430 "bdev_set_qos_limit", 00:06:37.430 "bdev_set_qd_sampling_period", 00:06:37.430 "bdev_get_bdevs", 00:06:37.430 "bdev_reset_iostat", 00:06:37.430 "bdev_get_iostat", 00:06:37.430 "bdev_examine", 00:06:37.430 "bdev_wait_for_examine", 00:06:37.430 "bdev_set_options", 00:06:37.430 "accel_get_stats", 00:06:37.430 "accel_set_options", 00:06:37.430 "accel_set_driver", 00:06:37.430 "accel_crypto_key_destroy", 00:06:37.430 "accel_crypto_keys_get", 00:06:37.430 "accel_crypto_key_create", 00:06:37.430 "accel_assign_opc", 00:06:37.430 "accel_get_module_info", 00:06:37.430 "accel_get_opc_assignments", 00:06:37.430 "vmd_rescan", 00:06:37.430 "vmd_remove_device", 00:06:37.430 "vmd_enable", 00:06:37.430 "sock_get_default_impl", 00:06:37.430 "sock_set_default_impl", 00:06:37.430 "sock_impl_set_options", 00:06:37.430 "sock_impl_get_options", 00:06:37.430 "iobuf_get_stats", 00:06:37.430 "iobuf_set_options", 00:06:37.430 "keyring_get_keys", 00:06:37.430 "framework_get_pci_devices", 00:06:37.430 
"framework_get_config", 00:06:37.430 "framework_get_subsystems", 00:06:37.430 "fsdev_set_opts", 00:06:37.430 "fsdev_get_opts", 00:06:37.430 "trace_get_info", 00:06:37.430 "trace_get_tpoint_group_mask", 00:06:37.430 "trace_disable_tpoint_group", 00:06:37.430 "trace_enable_tpoint_group", 00:06:37.430 "trace_clear_tpoint_mask", 00:06:37.430 "trace_set_tpoint_mask", 00:06:37.430 "notify_get_notifications", 00:06:37.430 "notify_get_types", 00:06:37.430 "spdk_get_version", 00:06:37.430 "rpc_get_methods" 00:06:37.430 ] 00:06:37.430 17:56:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.430 17:56:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:37.430 17:56:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58658 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58658 ']' 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58658 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58658 00:06:37.430 killing process with pid 58658 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58658' 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58658 00:06:37.430 17:56:06 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58658 00:06:40.061 00:06:40.061 real 0m4.063s 00:06:40.061 user 0m7.171s 00:06:40.061 sys 0m0.648s 00:06:40.061 ************************************ 00:06:40.061 END TEST spdkcli_tcp 00:06:40.061 ************************************ 00:06:40.061 17:56:08 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.061 17:56:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.061 17:56:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.061 17:56:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.061 17:56:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.061 17:56:09 -- common/autotest_common.sh@10 -- # set +x 00:06:40.061 ************************************ 00:06:40.061 START TEST dpdk_mem_utility 00:06:40.061 ************************************ 00:06:40.061 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.061 * Looking for test storage... 
00:06:40.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:40.061 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:40.061 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:40.061 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:40.061 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.061 17:56:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:40.061 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.061 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.062 --rc genhtml_branch_coverage=1 00:06:40.062 --rc genhtml_function_coverage=1 00:06:40.062 --rc genhtml_legend=1 00:06:40.062 --rc geninfo_all_blocks=1 00:06:40.062 --rc geninfo_unexecuted_blocks=1 00:06:40.062 00:06:40.062 ' 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.062 --rc 
genhtml_branch_coverage=1 00:06:40.062 --rc genhtml_function_coverage=1 00:06:40.062 --rc genhtml_legend=1 00:06:40.062 --rc geninfo_all_blocks=1 00:06:40.062 --rc geninfo_unexecuted_blocks=1 00:06:40.062 00:06:40.062 ' 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.062 --rc genhtml_branch_coverage=1 00:06:40.062 --rc genhtml_function_coverage=1 00:06:40.062 --rc genhtml_legend=1 00:06:40.062 --rc geninfo_all_blocks=1 00:06:40.062 --rc geninfo_unexecuted_blocks=1 00:06:40.062 00:06:40.062 ' 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:40.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.062 --rc genhtml_branch_coverage=1 00:06:40.062 --rc genhtml_function_coverage=1 00:06:40.062 --rc genhtml_legend=1 00:06:40.062 --rc geninfo_all_blocks=1 00:06:40.062 --rc geninfo_unexecuted_blocks=1 00:06:40.062 00:06:40.062 ' 00:06:40.062 17:56:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:40.062 17:56:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58780 00:06:40.062 17:56:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.062 17:56:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58780 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58780 ']' 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.062 17:56:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:40.062 [2024-11-05 17:56:09.360229] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
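What follows is the heart of the dpdk_mem_utility test: one RPC to make the running target dump its DPDK memory map, then two passes of the helper script over that dump. Stripped of the harness, using the paths shown in this log:

# Ask the running target to write its memory map to a file.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# -> { "filename": "/tmp/spdk_mem_dump.txt" }

# Summarize heaps/mempools/memzones, then list heap 0's busy/free elements.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0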
00:06:40.062 [2024-11-05 17:56:09.360576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58780 ] 00:06:40.321 [2024-11-05 17:56:09.539936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.321 [2024-11-05 17:56:09.640982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.256 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.256 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:41.256 17:56:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:41.256 17:56:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:41.256 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.256 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.256 { 00:06:41.256 "filename": "/tmp/spdk_mem_dump.txt" 00:06:41.256 } 00:06:41.256 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.256 17:56:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:41.256 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:41.256 1 heaps totaling size 816.000000 MiB 00:06:41.256 size: 816.000000 MiB heap id: 0 00:06:41.256 end heaps---------- 00:06:41.256 9 mempools totaling size 595.772034 MiB 00:06:41.256 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:41.256 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:41.256 size: 92.545471 MiB name: bdev_io_58780 00:06:41.256 size: 50.003479 MiB name: msgpool_58780 00:06:41.256 size: 36.509338 MiB name: fsdev_io_58780 00:06:41.256 size: 21.763794 MiB name: PDU_Pool 00:06:41.256 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:41.256 size: 4.133484 MiB name: evtpool_58780 00:06:41.256 size: 0.026123 MiB name: Session_Pool 00:06:41.256 end mempools------- 00:06:41.256 6 memzones totaling size 4.142822 MiB 00:06:41.256 size: 1.000366 MiB name: RG_ring_0_58780 00:06:41.256 size: 1.000366 MiB name: RG_ring_1_58780 00:06:41.256 size: 1.000366 MiB name: RG_ring_4_58780 00:06:41.256 size: 1.000366 MiB name: RG_ring_5_58780 00:06:41.256 size: 0.125366 MiB name: RG_ring_2_58780 00:06:41.256 size: 0.015991 MiB name: RG_ring_3_58780 00:06:41.256 end memzones------- 00:06:41.256 17:56:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:41.517 heap id: 0 total size: 816.000000 MiB number of busy elements: 323 number of free elements: 18 00:06:41.517 list of free elements. 
size: 16.789429 MiB
00:06:41.517 element at address: 0x200006400000 with size: 1.995972 MiB
00:06:41.517 element at address: 0x20000a600000 with size: 1.995972 MiB
00:06:41.517 element at address: 0x200003e00000 with size: 1.991028 MiB
00:06:41.517 element at address: 0x200018d00040 with size: 0.999939 MiB
00:06:41.517 element at address: 0x200019100040 with size: 0.999939 MiB
00:06:41.517 element at address: 0x200019200000 with size: 0.999084 MiB
00:06:41.517 element at address: 0x200031e00000 with size: 0.994324 MiB
00:06:41.517 element at address: 0x200000400000 with size: 0.992004 MiB
00:06:41.517 element at address: 0x200018a00000 with size: 0.959656 MiB
00:06:41.517 element at address: 0x200019500040 with size: 0.936401 MiB
00:06:41.517 element at address: 0x200000200000 with size: 0.716980 MiB
00:06:41.517 element at address: 0x20001ac00000 with size: 0.559998 MiB
00:06:41.517 element at address: 0x200000c00000 with size: 0.490173 MiB
00:06:41.517 element at address: 0x200018e00000 with size: 0.487976 MiB
00:06:41.517 element at address: 0x200019600000 with size: 0.485413 MiB
00:06:41.517 element at address: 0x200012c00000 with size: 0.443237 MiB
00:06:41.517 element at address: 0x200028000000 with size: 0.390442 MiB
00:06:41.517 element at address: 0x200000800000 with size: 0.350891 MiB
00:06:41.517 list of standard malloc elements. size: 199.289673 MiB
00:06:41.517 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:06:41.517 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:06:41.517 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:06:41.517 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:06:41.517 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:06:41.517 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:06:41.517 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:06:41.517 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:06:41.517 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:06:41.517 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:06:41.517 element at address: 0x200012bff040 with size: 0.000305 MiB
00:06:41.517 [several hundred elements of 0.000244 MiB each follow in the original dump, covering the address runs 0x2000002d7b00-0x2000004ffdc0, 0x20000087e1c0-0x2000008ffa80, 0x200000c7d7c0-0x200000cff000, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012cf24c0, 0x200018afdd00-0x200018efdd00, 0x2000192ffc40-0x2000196bc680, 0x20001ac8f5c0-0x20001ac953c0, and 0x200028063f40-0x20002806fe80; condensed here]
00:06:41.520 list of memzone associated elements. size: 599.920898 MiB
00:06:41.520 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:06:41.520 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:41.520 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:06:41.520 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:41.520 element at address: 0x200012df4740 with size: 92.045105 MiB
00:06:41.520 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58780_0
00:06:41.520 element at address: 0x200000dff340 with size: 48.003113 MiB
00:06:41.520 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58780_0
00:06:41.520 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:06:41.520 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58780_0
00:06:41.520 element at address: 0x2000197be900 with size: 20.255615 MiB
00:06:41.520 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:41.520 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:06:41.520 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:41.520 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:06:41.520 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58780_0
00:06:41.520 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:06:41.520 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58780
00:06:41.520 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:06:41.520 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58780
00:06:41.520 element at address: 0x200018efde00 with size: 1.008179 MiB
00:06:41.520 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:41.520 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:06:41.520 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:41.520 element at address: 0x200018afde00 with size: 1.008179 MiB
00:06:41.520 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:41.520 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:06:41.520 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:41.520 element at address: 0x200000cff100 with size: 1.000549 MiB
00:06:41.520 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58780
00:06:41.520 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:06:41.520 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58780
00:06:41.520 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:06:41.520 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58780
00:06:41.520 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:06:41.520 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58780
00:06:41.520 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:06:41.520 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58780
00:06:41.520 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:06:41.520 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58780
00:06:41.520 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:06:41.520 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:41.520 element at address: 0x200012c72280 with size: 0.500549 MiB
00:06:41.520 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:41.520 element at address: 0x20001967c440 with size: 0.250549 MiB
00:06:41.520 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:41.520 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:06:41.520 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58780
00:06:41.520 element at address: 0x20000085df80 with size: 0.125549 MiB
00:06:41.520 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58780
00:06:41.520 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:06:41.520 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:41.520 element at address: 0x200028064140 with size: 0.023804 MiB
00:06:41.520 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:41.520 element at address: 0x200000859d40 with size: 0.016174 MiB
00:06:41.520 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58780
00:06:41.520 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:06:41.520 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:41.520 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:06:41.520 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58780
00:06:41.520 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:06:41.520 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58780
00:06:41.520 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:06:41.520 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58780
00:06:41.520 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:06:41.520 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:41.520 17:56:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:41.520 17:56:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58780 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58780 ']' 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58780 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58780 00:06:41.520 killing process with pid 58780 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58780' 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58780 00:06:41.520 17:56:10 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58780 00:06:44.075 00:06:44.075 real 0m3.920s 00:06:44.075 user 0m3.787s 00:06:44.075 sys 0m0.619s 00:06:44.075 17:56:12 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.075 ************************************ 00:06:44.075 END TEST dpdk_mem_utility 00:06:44.075 ************************************ 00:06:44.075 17:56:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:44.075 17:56:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:44.075 17:56:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.075 17:56:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.075 17:56:13 -- common/autotest_common.sh@10 -- # set +x
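[Editorial sketch, not part of the log: the heap/memzone dump above is the kind of report test_dpdk_mem_info.sh collects from a running SPDK target. A minimal way to produce a dump in this format by hand, assuming the repo layout visible in this log and the env_dpdk_get_mem_stats RPC that this script family uses; the readiness sleep and the dump file location are assumptions, the CI harness instead polls with waitforlisten.]

    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/bin/spdk_tgt &          # start a target so the env layer has a heap to report on
    tgt_pid=$!
    sleep 2                              # crude readiness wait; assumption, the harness uses waitforlisten
    sudo ./scripts/rpc.py env_dpdk_get_mem_stats   # reply names the file the stats were written to
    kill "$tgt_pid"; wait "$tgt_pid"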
00:06:44.075 ************************************ 00:06:44.075 START TEST event 00:06:44.075 ************************************ 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:44.075 * Looking for test storage... 00:06:44.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.075 17:56:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.075 17:56:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.075 17:56:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.075 17:56:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.075 17:56:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.075 17:56:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.075 17:56:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.075 17:56:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.075 17:56:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.075 17:56:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.075 17:56:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.075 17:56:13 event -- scripts/common.sh@344 -- # case "$op" in 00:06:44.075 17:56:13 event -- scripts/common.sh@345 -- # : 1 00:06:44.075 17:56:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.075 17:56:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.075 17:56:13 event -- scripts/common.sh@365 -- # decimal 1 00:06:44.075 17:56:13 event -- scripts/common.sh@353 -- # local d=1 00:06:44.075 17:56:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.075 17:56:13 event -- scripts/common.sh@355 -- # echo 1 00:06:44.075 17:56:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.075 17:56:13 event -- scripts/common.sh@366 -- # decimal 2 00:06:44.075 17:56:13 event -- scripts/common.sh@353 -- # local d=2 00:06:44.075 17:56:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.075 17:56:13 event -- scripts/common.sh@355 -- # echo 2 00:06:44.075 17:56:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.075 17:56:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.075 17:56:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.075 17:56:13 event -- scripts/common.sh@368 -- # return 0 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.075 --rc genhtml_branch_coverage=1 00:06:44.075 --rc genhtml_function_coverage=1 00:06:44.075 --rc genhtml_legend=1 00:06:44.075 --rc geninfo_all_blocks=1 00:06:44.075 --rc geninfo_unexecuted_blocks=1 00:06:44.075 00:06:44.075 ' 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.075 --rc genhtml_branch_coverage=1 00:06:44.075 --rc genhtml_function_coverage=1 00:06:44.075 --rc genhtml_legend=1 00:06:44.075 --rc 
geninfo_all_blocks=1 00:06:44.075 --rc geninfo_unexecuted_blocks=1 00:06:44.075 00:06:44.075 ' 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.075 --rc genhtml_branch_coverage=1 00:06:44.075 --rc genhtml_function_coverage=1 00:06:44.075 --rc genhtml_legend=1 00:06:44.075 --rc geninfo_all_blocks=1 00:06:44.075 --rc geninfo_unexecuted_blocks=1 00:06:44.075 00:06:44.075 ' 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.075 --rc genhtml_branch_coverage=1 00:06:44.075 --rc genhtml_function_coverage=1 00:06:44.075 --rc genhtml_legend=1 00:06:44.075 --rc geninfo_all_blocks=1 00:06:44.075 --rc geninfo_unexecuted_blocks=1 00:06:44.075 00:06:44.075 ' 00:06:44.075 17:56:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:44.075 17:56:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.075 17:56:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:44.075 17:56:13 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.075 17:56:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.075 ************************************ 00:06:44.075 START TEST event_perf 00:06:44.075 ************************************ 00:06:44.075 17:56:13 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:44.075 Running I/O for 1 seconds...[2024-11-05 17:56:13.320026] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:44.076 [2024-11-05 17:56:13.320262] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58888 ] 00:06:44.335 [2024-11-05 17:56:13.501461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.335 [2024-11-05 17:56:13.615361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.335 Running I/O for 1 seconds...[2024-11-05 17:56:13.615527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.335 [2024-11-05 17:56:13.615673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.335 [2024-11-05 17:56:13.615756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:45.714 00:06:45.714 lcore 0: 203664 00:06:45.714 lcore 1: 203663 00:06:45.714 lcore 2: 203664 00:06:45.714 lcore 3: 203664 00:06:45.714 done. 
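[Editorial sketch, not part of the log: the per-lcore counters just printed come from the event_perf binary. It can be run standalone with exactly the flags run_test passed above, -m for the reactor core mask and -t for the run time in seconds; root is assumed here so DPDK can claim hugepages.]

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/event/event_perf/event_perf -m 0xF -t 1
    # prints one "lcore N: <events processed>" line per reactor, then "done."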
00:06:45.714 00:06:45.714 real 0m1.577s 00:06:45.714 user 0m4.322s 00:06:45.714 sys 0m0.134s 00:06:45.714 17:56:14 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.714 17:56:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.714 ************************************ 00:06:45.714 END TEST event_perf 00:06:45.714 ************************************ 00:06:45.714 17:56:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:45.714 17:56:14 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:45.714 17:56:14 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.714 17:56:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.714 ************************************ 00:06:45.714 START TEST event_reactor 00:06:45.714 ************************************ 00:06:45.714 17:56:14 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:45.714 [2024-11-05 17:56:14.975313] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:45.714 [2024-11-05 17:56:14.975438] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58928 ] 00:06:45.973 [2024-11-05 17:56:15.158140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.973 [2024-11-05 17:56:15.269348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.353 test_start 00:06:47.353 oneshot 00:06:47.353 tick 100 00:06:47.353 tick 100 00:06:47.353 tick 250 00:06:47.353 tick 100 00:06:47.353 tick 100 00:06:47.353 tick 100 00:06:47.353 tick 250 00:06:47.353 tick 500 00:06:47.353 tick 100 00:06:47.353 tick 100 00:06:47.353 tick 250 00:06:47.353 tick 100 00:06:47.353 tick 100 00:06:47.353 test_end 00:06:47.353 00:06:47.353 real 0m1.567s 00:06:47.353 user 0m1.358s 00:06:47.353 sys 0m0.100s 00:06:47.353 17:56:16 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.353 17:56:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:47.353 ************************************ 00:06:47.353 END TEST event_reactor 00:06:47.353 ************************************ 00:06:47.353 17:56:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:47.353 17:56:16 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:47.353 17:56:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.353 17:56:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.353 ************************************ 00:06:47.353 START TEST event_reactor_perf 00:06:47.353 ************************************ 00:06:47.353 17:56:16 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:47.353 [2024-11-05 17:56:16.621183] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
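[Editorial sketch, not part of the log: the test_start/oneshot/tick/test_end trace above is emitted by the reactor test binary; the 100/250/500 values appear to correspond to the different timer periods it exercises. Standalone invocation with the same -t 1 run time recorded above, on a single reactor per the 0x1 core mask in the EAL line:]

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/event/reactor/reactor -t 1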
00:06:47.353 [2024-11-05 17:56:16.621289] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:06:47.612 [2024-11-05 17:56:16.800470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.613 [2024-11-05 17:56:16.903670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.991 test_start 00:06:48.991 test_end 00:06:48.991 Performance: 394043 events per second 00:06:48.991 00:06:48.991 real 0m1.550s 00:06:48.991 user 0m1.344s 00:06:48.991 sys 0m0.097s 00:06:48.991 ************************************ 00:06:48.991 END TEST event_reactor_perf 00:06:48.991 ************************************ 00:06:48.991 17:56:18 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.991 17:56:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:48.991 17:56:18 event -- event/event.sh@49 -- # uname -s 00:06:48.991 17:56:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:48.991 17:56:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:48.991 17:56:18 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.991 17:56:18 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.991 17:56:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:48.991 ************************************ 00:06:48.991 START TEST event_scheduler 00:06:48.991 ************************************ 00:06:48.991 17:56:18 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:49.251 * Looking for test storage... 
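[Editorial sketch, not part of the log: the event_scheduler suite below starts the scheduler test app held at --wait-for-rpc, switches it to the dynamic scheduler, and then creates threads through the test's RPC plugin. The flow condensed from scheduler.sh and the rpc_cmd traces in this log; the sleep and the PYTHONPATH arrangement needed for --plugin scheduler_plugin are assumptions, they are not visible in the log itself.]

    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    sched_pid=$!
    sleep 2                                         # assumption; the harness uses waitforlisten
    sudo ./scripts/rpc.py framework_set_scheduler dynamic
    sudo ./scripts/rpc.py framework_start_init
    # one thread pinned to core 0 reporting 100% active load, as in scheduler_create_thread:
    sudo ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    kill "$sched_pid"; wait "$sched_pid"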
00:06:49.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.251 17:56:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.251 --rc genhtml_branch_coverage=1 00:06:49.251 --rc genhtml_function_coverage=1 00:06:49.251 --rc genhtml_legend=1 00:06:49.251 --rc geninfo_all_blocks=1 00:06:49.251 --rc geninfo_unexecuted_blocks=1 00:06:49.251 00:06:49.251 ' 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.251 --rc genhtml_branch_coverage=1 00:06:49.251 --rc genhtml_function_coverage=1 00:06:49.251 --rc genhtml_legend=1 00:06:49.251 --rc geninfo_all_blocks=1 00:06:49.251 --rc geninfo_unexecuted_blocks=1 00:06:49.251 00:06:49.251 ' 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.251 --rc genhtml_branch_coverage=1 00:06:49.251 --rc genhtml_function_coverage=1 00:06:49.251 --rc genhtml_legend=1 00:06:49.251 --rc geninfo_all_blocks=1 00:06:49.251 --rc geninfo_unexecuted_blocks=1 00:06:49.251 00:06:49.251 ' 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:49.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.251 --rc genhtml_branch_coverage=1 00:06:49.251 --rc genhtml_function_coverage=1 00:06:49.251 --rc genhtml_legend=1 00:06:49.251 --rc geninfo_all_blocks=1 00:06:49.251 --rc geninfo_unexecuted_blocks=1 00:06:49.251 00:06:49.251 ' 00:06:49.251 17:56:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:49.251 17:56:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59040 00:06:49.251 17:56:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:49.251 17:56:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.251 17:56:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59040 00:06:49.251 17:56:18 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59040 ']' 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:49.251 17:56:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.251 [2024-11-05 17:56:18.516233] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:06:49.251 [2024-11-05 17:56:18.516384] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59040 ] 00:06:49.511 [2024-11-05 17:56:18.696917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.511 [2024-11-05 17:56:18.813516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.511 [2024-11-05 17:56:18.813679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.511 [2024-11-05 17:56:18.813866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.511 [2024-11-05 17:56:18.813876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.080 17:56:19 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.080 17:56:19 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:06:50.080 17:56:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:50.080 17:56:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.080 17:56:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:50.080 POWER: Cannot set governor of lcore 0 to userspace 00:06:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:50.080 POWER: Cannot set governor of lcore 0 to performance 00:06:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:50.080 POWER: Cannot set governor of lcore 0 to userspace 00:06:50.080 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:50.080 POWER: Cannot set governor of lcore 0 to userspace 00:06:50.080 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:50.080 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:50.080 POWER: Unable to set Power Management Environment for lcore 0 00:06:50.080 [2024-11-05 17:56:19.347158] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:50.080 [2024-11-05 17:56:19.347253] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:50.080 [2024-11-05 17:56:19.347337] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:50.080 [2024-11-05 17:56:19.347430] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:50.080 [2024-11-05 17:56:19.347470] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:50.080 [2024-11-05 17:56:19.347502] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:50.080 17:56:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.080 17:56:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:50.080 17:56:19 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.080 17:56:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.339 [2024-11-05 17:56:19.658058] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:50.340 17:56:19 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.340 17:56:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:50.340 17:56:19 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.340 17:56:19 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.340 17:56:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 ************************************ 00:06:50.599 START TEST scheduler_create_thread 00:06:50.599 ************************************ 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 2 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 3 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 4 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 5 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 6 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 7 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 8 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 9 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 10 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.599 17:56:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.538 17:56:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.538 17:56:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:51.538 17:56:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:51.538 17:56:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.538 17:56:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.936 ************************************ 00:06:52.936 END TEST scheduler_create_thread 00:06:52.936 ************************************ 00:06:52.936 17:56:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:52.936 00:06:52.936 real 0m2.135s 00:06:52.936 user 0m0.027s 00:06:52.936 sys 0m0.006s 00:06:52.936 17:56:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.936 17:56:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.936 17:56:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:52.936 17:56:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59040 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59040 ']' 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59040 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59040 00:06:52.936 killing process with pid 59040 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59040' 00:06:52.936 17:56:21 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59040 00:06:52.936 17:56:21 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 59040 00:06:53.212 [2024-11-05 17:56:22.287284] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:54.169 00:06:54.169 real 0m5.224s 00:06:54.169 user 0m8.607s 00:06:54.169 sys 0m0.531s 00:06:54.169 ************************************ 00:06:54.169 END TEST event_scheduler 00:06:54.169 ************************************ 00:06:54.169 17:56:23 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.169 17:56:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.433 17:56:23 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:54.433 17:56:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:54.433 17:56:23 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.433 17:56:23 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.433 17:56:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.433 ************************************ 00:06:54.433 START TEST app_repeat 00:06:54.433 ************************************ 00:06:54.433 17:56:23 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59141 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.433 Process app_repeat pid: 59141 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59141' 00:06:54.433 spdk_app_start Round 0 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:54.433 17:56:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59141 /var/tmp/spdk-nbd.sock 00:06:54.433 17:56:23 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59141 ']' 00:06:54.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:54.433 17:56:23 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.433 17:56:23 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:54.433 17:56:23 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.433 17:56:23 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:54.433 17:56:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.433 [2024-11-05 17:56:23.581215] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:06:54.433 [2024-11-05 17:56:23.581523] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59141 ] 00:06:54.693 [2024-11-05 17:56:23.763951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.693 [2024-11-05 17:56:23.873496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.693 [2024-11-05 17:56:23.873524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.263 17:56:24 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:55.263 17:56:24 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:55.263 17:56:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.523 Malloc0 00:06:55.523 17:56:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.782 Malloc1 00:06:55.782 17:56:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.782 17:56:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.041 /dev/nbd0 00:06:56.041 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.041 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:56.041 17:56:25 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.041 1+0 records in 00:06:56.041 1+0 records out 00:06:56.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402214 s, 10.2 MB/s 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:56.041 17:56:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:56.041 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.041 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.041 17:56:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.300 /dev/nbd1 00:06:56.300 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.300 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.300 1+0 records in 00:06:56.300 1+0 records out 00:06:56.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382575 s, 10.7 MB/s 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:56.300 17:56:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:56.300 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.300 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.300 17:56:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.300 17:56:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
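The waitfornbd calls traced above poll /proc/partitions until the kernel exposes the new device node, then confirm the device actually serves I/O with a single 4 KiB O_DIRECT read whose byte count must be non-zero. Roughly, with the retry limit and paths taken from the trace (the sleep interval between polls is an assumption):

    # Sketch of waitfornbd as traced above: wait for the device to appear,
    # then prove it is readable. The 1-second poll interval is an assumption.
    waitfornbd_sketch() {
        local nbd_name=$1 i
        local testfile=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 1
        done
        # One 4096-byte direct read; a zero-length result means the device
        # showed up in /proc/partitions but is not serving I/O yet.
        dd if="/dev/$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s "$testfile")
        rm -f "$testfile"
        [[ $size != 0 ]]
    }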
00:06:56.300 17:56:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.560 { 00:06:56.560 "nbd_device": "/dev/nbd0", 00:06:56.560 "bdev_name": "Malloc0" 00:06:56.560 }, 00:06:56.560 { 00:06:56.560 "nbd_device": "/dev/nbd1", 00:06:56.560 "bdev_name": "Malloc1" 00:06:56.560 } 00:06:56.560 ]' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.560 { 00:06:56.560 "nbd_device": "/dev/nbd0", 00:06:56.560 "bdev_name": "Malloc0" 00:06:56.560 }, 00:06:56.560 { 00:06:56.560 "nbd_device": "/dev/nbd1", 00:06:56.560 "bdev_name": "Malloc1" 00:06:56.560 } 00:06:56.560 ]' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.560 /dev/nbd1' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.560 /dev/nbd1' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.560 256+0 records in 00:06:56.560 256+0 records out 00:06:56.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115475 s, 90.8 MB/s 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.560 256+0 records in 00:06:56.560 256+0 records out 00:06:56.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284364 s, 36.9 MB/s 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.560 256+0 records in 00:06:56.560 256+0 records out 00:06:56.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311485 s, 33.7 MB/s 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.560 17:56:25 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.560 17:56:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.821 17:56:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.081 17:56:26 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.081 17:56:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.340 17:56:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.340 17:56:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:57.909 17:56:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.848 [2024-11-05 17:56:28.098715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.107 [2024-11-05 17:56:28.203537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.107 [2024-11-05 17:56:28.203538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.107 [2024-11-05 17:56:28.392581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.107 [2024-11-05 17:56:28.392685] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.016 17:56:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:01.016 spdk_app_start Round 1 00:07:01.016 17:56:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:01.016 17:56:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59141 /var/tmp/spdk-nbd.sock 00:07:01.016 17:56:29 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59141 ']' 00:07:01.016 17:56:29 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.016 17:56:29 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:01.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.016 17:56:29 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
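The nbd_get_count sequence traced above asks the target which NBD devices it is exporting and counts the /dev/nbd entries that come back; after the stop_disk calls the list is empty, grep -c exits non-zero, and the trailing true in the trace keeps the pipeline from failing. As a sketch built from exactly the commands shown:

    # Sketch of nbd_get_count as traced: list exported NBD devices over RPC
    # and count how many /dev/nbd* entries come back (0 after teardown).
    nbd_get_count_sketch() {
        local rpc_server=/var/tmp/spdk-nbd.sock
        local disks_json disks_name count
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero on no matches, hence the guard.
        count=$(echo "$disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }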
00:07:01.016 17:56:29 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:01.016 17:56:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.016 17:56:30 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:01.016 17:56:30 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:01.016 17:56:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.275 Malloc0 00:07:01.275 17:56:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.534 Malloc1 00:07:01.534 17:56:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.534 17:56:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.535 17:56:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.535 17:56:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.535 17:56:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.794 /dev/nbd0 00:07:01.794 17:56:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.794 17:56:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.794 1+0 records in 00:07:01.794 1+0 records out 
00:07:01.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375752 s, 10.9 MB/s 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:01.794 17:56:30 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:01.794 17:56:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.794 17:56:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.794 17:56:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:02.053 /dev/nbd1 00:07:02.053 17:56:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:02.053 17:56:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:02.053 1+0 records in 00:07:02.053 1+0 records out 00:07:02.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336739 s, 12.2 MB/s 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:02.053 17:56:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:02.053 17:56:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:02.053 17:56:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.053 17:56:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.053 17:56:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.053 17:56:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.313 { 00:07:02.313 "nbd_device": "/dev/nbd0", 00:07:02.313 "bdev_name": "Malloc0" 00:07:02.313 }, 00:07:02.313 { 00:07:02.313 "nbd_device": "/dev/nbd1", 00:07:02.313 "bdev_name": "Malloc1" 00:07:02.313 } 
00:07:02.313 ]' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.313 { 00:07:02.313 "nbd_device": "/dev/nbd0", 00:07:02.313 "bdev_name": "Malloc0" 00:07:02.313 }, 00:07:02.313 { 00:07:02.313 "nbd_device": "/dev/nbd1", 00:07:02.313 "bdev_name": "Malloc1" 00:07:02.313 } 00:07:02.313 ]' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.313 /dev/nbd1' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.313 /dev/nbd1' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.313 256+0 records in 00:07:02.313 256+0 records out 00:07:02.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536337 s, 196 MB/s 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.313 256+0 records in 00:07:02.313 256+0 records out 00:07:02.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298662 s, 35.1 MB/s 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.313 256+0 records in 00:07:02.313 256+0 records out 00:07:02.313 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296829 s, 35.3 MB/s 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.313 17:56:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.313 17:56:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.572 17:56:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.831 17:56:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.093 17:56:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.093 17:56:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.398 17:56:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.777 [2024-11-05 17:56:33.764553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.777 [2024-11-05 17:56:33.868715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.777 [2024-11-05 17:56:33.868734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.777 [2024-11-05 17:56:34.061038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.777 [2024-11-05 17:56:34.061120] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.683 17:56:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.683 spdk_app_start Round 2 00:07:06.683 17:56:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:06.683 17:56:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59141 /var/tmp/spdk-nbd.sock 00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59141 ']' 00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:06.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
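Each app_repeat round above re-creates the same fixture: two RAM-backed malloc bdevs (64 MB with 4096-byte blocks, per the bdev_malloc_create 64 4096 calls in the trace) exported to the host as /dev/nbd0 and /dev/nbd1. Condensed from the traced RPCs, the per-round setup amounts to:

    # Sketch of the per-round setup traced above: two 64 MB malloc bdevs
    # with 4096-byte blocks, each exported to the host as an NBD device.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s "$sock" bdev_malloc_create 64 4096   # prints the bdev name, e.g. Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096   # e.g. Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1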
00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:06.683 17:56:35 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:06.683 17:56:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.942 Malloc0 00:07:06.942 17:56:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.201 Malloc1 00:07:07.201 17:56:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.201 17:56:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.461 /dev/nbd0 00:07:07.461 17:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.461 17:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.461 1+0 records in 00:07:07.461 1+0 records out 
00:07:07.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203466 s, 20.1 MB/s 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:07.461 17:56:36 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:07.461 17:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.461 17:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.461 17:56:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.722 /dev/nbd1 00:07:07.722 17:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.722 17:56:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.722 1+0 records in 00:07:07.722 1+0 records out 00:07:07.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019675 s, 20.8 MB/s 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:07.722 17:56:36 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:07.722 17:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.722 17:56:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.722 17:56:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.722 17:56:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.722 17:56:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.981 { 00:07:07.981 "nbd_device": "/dev/nbd0", 00:07:07.981 "bdev_name": "Malloc0" 00:07:07.981 }, 00:07:07.981 { 00:07:07.981 "nbd_device": "/dev/nbd1", 00:07:07.981 "bdev_name": "Malloc1" 00:07:07.981 } 
00:07:07.981 ]' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.981 { 00:07:07.981 "nbd_device": "/dev/nbd0", 00:07:07.981 "bdev_name": "Malloc0" 00:07:07.981 }, 00:07:07.981 { 00:07:07.981 "nbd_device": "/dev/nbd1", 00:07:07.981 "bdev_name": "Malloc1" 00:07:07.981 } 00:07:07.981 ]' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.981 /dev/nbd1' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.981 /dev/nbd1' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.981 256+0 records in 00:07:07.981 256+0 records out 00:07:07.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503488 s, 208 MB/s 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.981 256+0 records in 00:07:07.981 256+0 records out 00:07:07.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275954 s, 38.0 MB/s 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.981 256+0 records in 00:07:07.981 256+0 records out 00:07:07.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295549 s, 35.5 MB/s 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.981 17:56:37 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.981 17:56:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.241 17:56:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.500 17:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.759 17:56:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.759 17:56:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.327 17:56:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:10.265 [2024-11-05 17:56:39.481594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.265 [2024-11-05 17:56:39.585554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.265 [2024-11-05 17:56:39.585555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.523 [2024-11-05 17:56:39.776590] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:10.523 [2024-11-05 17:56:39.776669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.430 17:56:41 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59141 /var/tmp/spdk-nbd.sock 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59141 ']' 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
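The nbd_dd_data_verify pass repeated in every round above boils down to: write 1 MiB of random data through each NBD device with O_DIRECT, then compare the device contents byte-for-byte against the source file. As a standalone sketch, with paths, sizes, and flags taken directly from the trace:

    # Sketch of the nbd_dd_data_verify pattern repeated in each round above:
    # write 1 MiB of random data through each NBD device, then compare the
    # device contents byte-for-byte against the source file.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"   # any mismatch fails the test
    done
    rm "$tmp_file"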
00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:12.430 17:56:41 event.app_repeat -- event/event.sh@39 -- # killprocess 59141 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59141 ']' 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59141 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59141 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:12.430 killing process with pid 59141 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59141' 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59141 00:07:12.430 17:56:41 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59141 00:07:13.371 spdk_app_start is called in Round 0. 00:07:13.371 Shutdown signal received, stop current app iteration 00:07:13.371 Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 reinitialization... 00:07:13.371 spdk_app_start is called in Round 1. 00:07:13.371 Shutdown signal received, stop current app iteration 00:07:13.371 Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 reinitialization... 00:07:13.371 spdk_app_start is called in Round 2. 00:07:13.371 Shutdown signal received, stop current app iteration 00:07:13.371 Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 reinitialization... 00:07:13.371 spdk_app_start is called in Round 3. 00:07:13.371 Shutdown signal received, stop current app iteration 00:07:13.371 17:56:42 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:13.371 17:56:42 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:13.371 00:07:13.371 real 0m19.125s 00:07:13.371 user 0m40.628s 00:07:13.371 sys 0m3.019s 00:07:13.371 17:56:42 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:13.371 17:56:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:13.371 ************************************ 00:07:13.371 END TEST app_repeat 00:07:13.371 ************************************ 00:07:13.631 17:56:42 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:13.631 17:56:42 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:13.631 17:56:42 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:13.631 17:56:42 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.631 17:56:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.631 ************************************ 00:07:13.631 START TEST cpu_locks 00:07:13.631 ************************************ 00:07:13.631 17:56:42 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:13.631 * Looking for test storage... 
00:07:13.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:13.631 17:56:42 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.631 17:56:42 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.631 17:56:42 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.631 17:56:42 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.631 17:56:42 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.632 17:56:42 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.632 --rc genhtml_branch_coverage=1 00:07:13.632 --rc genhtml_function_coverage=1 00:07:13.632 --rc genhtml_legend=1 00:07:13.632 --rc geninfo_all_blocks=1 00:07:13.632 --rc geninfo_unexecuted_blocks=1 00:07:13.632 00:07:13.632 ' 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.632 --rc genhtml_branch_coverage=1 00:07:13.632 --rc genhtml_function_coverage=1 
00:07:13.632 --rc genhtml_legend=1 00:07:13.632 --rc geninfo_all_blocks=1 00:07:13.632 --rc geninfo_unexecuted_blocks=1 00:07:13.632 00:07:13.632 ' 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.632 --rc genhtml_branch_coverage=1 00:07:13.632 --rc genhtml_function_coverage=1 00:07:13.632 --rc genhtml_legend=1 00:07:13.632 --rc geninfo_all_blocks=1 00:07:13.632 --rc geninfo_unexecuted_blocks=1 00:07:13.632 00:07:13.632 ' 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:13.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.632 --rc genhtml_branch_coverage=1 00:07:13.632 --rc genhtml_function_coverage=1 00:07:13.632 --rc genhtml_legend=1 00:07:13.632 --rc geninfo_all_blocks=1 00:07:13.632 --rc geninfo_unexecuted_blocks=1 00:07:13.632 00:07:13.632 ' 00:07:13.632 17:56:42 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:13.632 17:56:42 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:13.632 17:56:42 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:13.632 17:56:42 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:13.632 17:56:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.632 ************************************ 00:07:13.632 START TEST default_locks 00:07:13.632 ************************************ 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59588 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59588 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59588 ']' 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:13.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:13.632 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.891 [2024-11-05 17:56:43.057459] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
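The xtrace above (scripts/common.sh) shows how the harness picks lcov option names: "lt 1.15 2" splits both version strings on '.', '-' and ':' and compares them field by field as integers, and since the installed lcov reported 1.15, which is less than 2, the pre-2.0 "--rc lcov_*" option names get exported. A minimal standalone sketch of that comparison, assuming purely numeric fields (the real cmp_versions also validates each field with a regex and handles the '>' operator):

  # version_lt A B -> exit 0 if version A sorts before version B
  version_lt() {
      local -a v1 v2
      local i c1 c2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
          c1=${v1[i]:-0} c2=${v2[i]:-0}   # missing fields count as 0
          (( c1 < c2 )) && return 0
          (( c1 > c2 )) && return 1
      done
      return 1                            # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov < 2: use the old --rc lcov_* option names"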
00:07:13.891 [2024-11-05 17:56:43.057594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59588 ] 00:07:14.151 [2024-11-05 17:56:43.237855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.151 [2024-11-05 17:56:43.343775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.111 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.111 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:15.111 17:56:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59588 00:07:15.111 17:56:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.111 17:56:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59588 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59588 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59588 ']' 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59588 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59588 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:15.370 killing process with pid 59588 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59588' 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59588 00:07:15.370 17:56:44 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59588 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59588 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59588 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59588 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59588 ']' 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.905 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.905 ERROR: process (pid: 59588) is no longer running 00:07:17.905 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59588) - No such process 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:17.905 00:07:17.905 real 0m4.042s 00:07:17.905 user 0m3.989s 00:07:17.905 sys 0m0.704s 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:17.905 ************************************ 00:07:17.905 17:56:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.905 END TEST default_locks 00:07:17.905 ************************************ 00:07:17.905 17:56:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:17.905 17:56:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:17.905 17:56:47 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:17.905 17:56:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.905 ************************************ 00:07:17.905 START TEST default_locks_via_rpc 00:07:17.905 ************************************ 00:07:17.905 17:56:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59663 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59663 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59663 ']' 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
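The default_locks teardown traced above is the expected-failure pattern used throughout this suite: after killprocess, the test runs NOT waitforlisten on the dead pid and only passes if that call fails. A reduced sketch of the NOT helper, keeping just the status inversion (the real helper in autotest_common.sh also validates its argument via valid_exec_arg and special-cases an externally requested exit status):

  NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then
          return "$es"   # command died from a signal: propagate, not an expected failure
      fi
      (( es != 0 ))      # nonzero status -> NOT succeeds; success -> NOT fails
  }

  # as in the trace: pid 59588 was just killed, so this must fail for the test to pass
  NOT waitforlisten 59588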
00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.906 17:56:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.906 [2024-11-05 17:56:47.176138] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:17.906 [2024-11-05 17:56:47.176269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:07:18.165 [2024-11-05 17:56:47.354880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.165 [2024-11-05 17:56:47.463023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59663 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59663 00:07:19.103 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.362 17:56:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59663 00:07:19.362 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59663 ']' 00:07:19.362 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59663 00:07:19.362 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:19.362 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:19.621 17:56:48 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59663 00:07:19.621 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:19.621 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:19.621 killing process with pid 59663 00:07:19.621 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59663' 00:07:19.621 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59663 00:07:19.621 17:56:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59663 00:07:22.186 00:07:22.186 real 0m3.975s 00:07:22.186 user 0m3.898s 00:07:22.186 sys 0m0.666s 00:07:22.186 17:56:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.186 ************************************ 00:07:22.186 END TEST default_locks_via_rpc 00:07:22.186 ************************************ 00:07:22.186 17:56:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.186 17:56:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:22.186 17:56:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.186 17:56:51 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.186 17:56:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.186 ************************************ 00:07:22.186 START TEST non_locking_app_on_locked_coremask 00:07:22.186 ************************************ 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59732 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59732 /var/tmp/spdk.sock 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59732 ']' 00:07:22.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.186 17:56:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.186 [2024-11-05 17:56:51.227497] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:07:22.186 [2024-11-05 17:56:51.227621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59732 ] 00:07:22.186 [2024-11-05 17:56:51.393331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.186 [2024-11-05 17:56:51.500609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59753 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59753 /var/tmp/spdk2.sock 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59753 ']' 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:23.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:23.157 17:56:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:23.157 [2024-11-05 17:56:52.427586] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:23.157 [2024-11-05 17:56:52.427737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59753 ] 00:07:23.417 [2024-11-05 17:56:52.611658] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
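Throughout the suite, locks_exist (event/cpu_locks.sh@22) is the assertion that a target really holds its core locks: spdk_tgt takes a lock on /var/tmp/spdk_cpu_lock_NNN for every core it claims, so listing the process's locks and grepping for that prefix is enough, as the next lines show for pid 59732. The check as a standalone sketch (the wrapper name assert_core_locked is mine, not SPDK's):

  # Does PID $1 hold at least one /var/tmp/spdk_cpu_lock_* lock?
  assert_core_locked() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  assert_core_locked 59732 || { echo "pid 59732 holds no core lock" >&2; exit 1; }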
00:07:23.417 [2024-11-05 17:56:52.611708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.676 [2024-11-05 17:56:52.828284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.215 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:26.215 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:26.215 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59732 00:07:26.215 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59732 00:07:26.215 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59732 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59732 ']' 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59732 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59732 00:07:26.783 killing process with pid 59732 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59732' 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59732 00:07:26.783 17:56:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59732 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59753 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59753 ']' 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59753 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59753 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:32.058 killing process with pid 59753 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59753' 00:07:32.058 17:57:00 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59753 00:07:32.058 17:57:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59753 00:07:33.999 ************************************ 00:07:33.999 END TEST non_locking_app_on_locked_coremask 00:07:33.999 ************************************ 00:07:33.999 00:07:33.999 real 0m11.877s 00:07:33.999 user 0m12.131s 00:07:33.999 sys 0m1.491s 00:07:33.999 17:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:33.999 17:57:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.999 17:57:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:33.999 17:57:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:33.999 17:57:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:33.999 17:57:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:33.999 ************************************ 00:07:33.999 START TEST locking_app_on_unlocked_coremask 00:07:33.999 ************************************ 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59906 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59906 /var/tmp/spdk.sock 00:07:33.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59906 ']' 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:33.999 17:57:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.999 [2024-11-05 17:57:03.174678] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:33.999 [2024-11-05 17:57:03.175068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:07:34.259 [2024-11-05 17:57:03.355607] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
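This case inverts the previous one: here the first target (pid 59906) starts on core 0 with --disable-cpumask-locks, and the second, locking instance is expected to start cleanly beside it. The launch pattern shared by both tests, condensed (the readiness polling and cleanup trap of the real script are omitted):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # First instance: core 0, core locks disabled, default RPC socket /var/tmp/spdk.sock.
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks &
  pid1=$!

  # Second instance: same core mask, locks enabled, its own RPC socket.
  # It can claim core 0 only because the first instance left the lock file untaken.
  "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock &
  pid2=$!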
00:07:34.259 [2024-11-05 17:57:03.355859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.259 [2024-11-05 17:57:03.468216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59926 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59926 /var/tmp/spdk2.sock 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59926 ']' 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.197 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.197 [2024-11-05 17:57:04.414096] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:07:35.197 [2024-11-05 17:57:04.414435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59926 ] 00:07:35.456 [2024-11-05 17:57:04.596999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.715 [2024-11-05 17:57:04.823274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.251 17:57:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:38.251 17:57:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:38.251 17:57:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59926 00:07:38.251 17:57:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59926 00:07:38.251 17:57:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59906 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59906 ']' 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59906 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59906 00:07:38.510 killing process with pid 59906 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59906' 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59906 00:07:38.510 17:57:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59906 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59926 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59926 ']' 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59926 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59926 00:07:43.789 killing process with pid 59926 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:43.789 17:57:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59926' 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59926 00:07:43.789 17:57:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59926 00:07:45.694 ************************************ 00:07:45.695 END TEST locking_app_on_unlocked_coremask 00:07:45.695 ************************************ 00:07:45.695 00:07:45.695 real 0m11.454s 00:07:45.695 user 0m11.652s 00:07:45.695 sys 0m1.361s 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.695 17:57:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:45.695 17:57:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:45.695 17:57:14 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.695 17:57:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.695 ************************************ 00:07:45.695 START TEST locking_app_on_locked_coremask 00:07:45.695 ************************************ 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60076 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60076 /var/tmp/spdk.sock 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60076 ']' 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:45.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:45.695 17:57:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.695 [2024-11-05 17:57:14.695748] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:07:45.695 [2024-11-05 17:57:14.695875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60076 ] 00:07:45.695 [2024-11-05 17:57:14.862846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.695 [2024-11-05 17:57:14.969653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60092 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60092 /var/tmp/spdk2.sock 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60092 /var/tmp/spdk2.sock 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60092 /var/tmp/spdk2.sock 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60092 ']' 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:46.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:46.631 17:57:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.631 [2024-11-05 17:57:15.899850] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:07:46.631 [2024-11-05 17:57:15.899998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60092 ] 00:07:46.889 [2024-11-05 17:57:16.082499] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60076 has claimed it. 00:07:46.889 [2024-11-05 17:57:16.082564] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:47.456 ERROR: process (pid: 60092) is no longer running 00:07:47.456 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60092) - No such process 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60076 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60076 00:07:47.456 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:47.714 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60076 00:07:47.714 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60076 ']' 00:07:47.714 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60076 00:07:47.714 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:47.714 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:47.714 17:57:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60076 00:07:47.714 17:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:47.714 killing process with pid 60076 00:07:47.714 17:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:47.714 17:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60076' 00:07:47.714 17:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60076 00:07:47.714 17:57:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60076 00:07:50.245 00:07:50.245 real 0m4.730s 00:07:50.245 user 0m4.874s 00:07:50.245 sys 0m0.859s 00:07:50.245 17:57:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.245 17:57:19 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:50.245 ************************************ 00:07:50.245 END TEST locking_app_on_locked_coremask 00:07:50.245 ************************************ 00:07:50.245 17:57:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:50.245 17:57:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:50.245 17:57:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.245 17:57:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.245 ************************************ 00:07:50.245 START TEST locking_overlapped_coremask 00:07:50.245 ************************************ 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60156 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60156 /var/tmp/spdk.sock 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60156 ']' 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:50.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:50.245 17:57:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.245 [2024-11-05 17:57:19.526256] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:07:50.245 [2024-11-05 17:57:19.526381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60156 ] 00:07:50.503 [2024-11-05 17:57:19.707967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.762 [2024-11-05 17:57:19.829500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.762 [2024-11-05 17:57:19.829615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.762 [2024-11-05 17:57:19.829649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.697 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.697 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:51.697 17:57:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60180 00:07:51.697 17:57:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60180 /var/tmp/spdk2.sock 00:07:51.697 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60180 /var/tmp/spdk2.sock 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60180 /var/tmp/spdk2.sock 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60180 ']' 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.698 17:57:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.698 [2024-11-05 17:57:20.817050] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:07:51.698 [2024-11-05 17:57:20.817179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60180 ] 00:07:51.698 [2024-11-05 17:57:21.001597] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60156 has claimed it. 00:07:51.698 [2024-11-05 17:57:21.001685] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:52.266 ERROR: process (pid: 60180) is no longer running 00:07:52.266 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60180) - No such process 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60156 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60156 ']' 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60156 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60156 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60156' 00:07:52.266 killing process with pid 60156 00:07:52.266 17:57:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60156 00:07:52.266 17:57:21 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60156 00:07:54.814 00:07:54.814 real 0m4.490s 00:07:54.814 user 0m12.124s 00:07:54.814 sys 0m0.661s 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 ************************************ 00:07:54.814 END TEST locking_overlapped_coremask 00:07:54.814 ************************************ 00:07:54.814 17:57:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:54.814 17:57:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:54.814 17:57:23 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.814 17:57:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 ************************************ 00:07:54.814 START TEST locking_overlapped_coremask_via_rpc 00:07:54.814 ************************************ 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60244 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60244 /var/tmp/spdk.sock 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60244 ']' 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:54.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:54.814 17:57:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.814 [2024-11-05 17:57:24.067701] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:54.814 [2024-11-05 17:57:24.068111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60244 ] 00:07:55.075 [2024-11-05 17:57:24.245333] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
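The check_remaining_locks step traced just above (event/cpu_locks.sh@36-38) is how the overlapped test pins down exactly which cores stayed locked: with -m 0x7 the winning process holds cores 0-2, so the lock-file glob must match the brace expansion exactly. The same assertion as a standalone sketch:

  # Expect exactly /var/tmp/spdk_cpu_lock_000 .. _002 (cores 0-2 of mask 0x7) to exist.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
      echo "core locks match: ${locks[*]}"
  else
      echo "unexpected core locks: ${locks[*]}" >&2
      exit 1
  fi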
00:07:55.075 [2024-11-05 17:57:24.245406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.075 [2024-11-05 17:57:24.365631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.075 [2024-11-05 17:57:24.365812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.075 [2024-11-05 17:57:24.365843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60267 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60267 /var/tmp/spdk2.sock 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60267 ']' 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:56.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:56.040 17:57:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.040 [2024-11-05 17:57:25.351776] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:07:56.040 [2024-11-05 17:57:25.352079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60267 ] 00:07:56.300 [2024-11-05 17:57:25.536049] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:56.300 [2024-11-05 17:57:25.536125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.559 [2024-11-05 17:57:25.771355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.559 [2024-11-05 17:57:25.771471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.559 [2024-11-05 17:57:25.771515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:59.096 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.097 [2024-11-05 17:57:27.935641] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60244 has claimed it. 
00:07:59.097 request: 00:07:59.097 { 00:07:59.097 "method": "framework_enable_cpumask_locks", 00:07:59.097 "req_id": 1 00:07:59.097 } 00:07:59.097 Got JSON-RPC error response 00:07:59.097 response: 00:07:59.097 { 00:07:59.097 "code": -32603, 00:07:59.097 "message": "Failed to claim CPU core: 2" 00:07:59.097 } 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60244 /var/tmp/spdk.sock 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60244 ']' 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.097 17:57:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60267 /var/tmp/spdk2.sock 00:07:59.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60267 ']' 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
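The JSON-RPC failure above is the point of this test case: the first spdk_tgt runs with -m 0x7 (binary 111, cores 0-2) and the second with -m 0x1c (binary 11100, cores 2-4), so the two core masks overlap on core 2. Both start with --disable-cpumask-locks; once the first instance claims its cores through framework_enable_cpumask_locks, the same RPC on the second instance's socket fails with -32603. A minimal sketch of the conflict, with repository paths shortened and the targets backgrounded with & purely for illustration (the harness itself uses the waitforlisten helper seen in the trace):

    # first target: cores 0-2 (0x7 = 0b111); lock files claimed once the RPC runs
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks          # claims /var/tmp/spdk_cpu_lock_000..002

    # second target: cores 2-4 (0x1c = 0b11100) on its own RPC socket; core 2 overlaps
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # fails with JSON-RPC error -32603: "Failed to claim CPU core: 2", as in the response above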
00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:59.097 ************************************ 00:07:59.097 END TEST locking_overlapped_coremask_via_rpc 00:07:59.097 ************************************ 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:59.097 00:07:59.097 real 0m4.435s 00:07:59.097 user 0m1.269s 00:07:59.097 sys 0m0.235s 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.097 17:57:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.356 17:57:28 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:59.356 17:57:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60244 ]] 00:07:59.356 17:57:28 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60244 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60244 ']' 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60244 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60244 00:07:59.356 killing process with pid 60244 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60244' 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60244 00:07:59.356 17:57:28 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60244 00:08:01.886 17:57:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60267 ]] 00:08:01.886 17:57:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60267 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60267 ']' 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60267 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:01.886 
17:57:30 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60267 00:08:01.886 killing process with pid 60267 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60267' 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60267 00:08:01.886 17:57:30 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60267 00:08:04.421 17:57:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:04.421 17:57:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:04.421 17:57:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60244 ]] 00:08:04.422 17:57:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60244 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60244 ']' 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60244 00:08:04.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60244) - No such process 00:08:04.422 Process with pid 60244 is not found 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60244 is not found' 00:08:04.422 17:57:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60267 ]] 00:08:04.422 Process with pid 60267 is not found 00:08:04.422 17:57:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60267 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60267 ']' 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60267 00:08:04.422 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60267) - No such process 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60267 is not found' 00:08:04.422 17:57:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:04.422 00:08:04.422 real 0m50.672s 00:08:04.422 user 1m26.187s 00:08:04.422 sys 0m7.260s 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.422 17:57:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:04.422 ************************************ 00:08:04.422 END TEST cpu_locks 00:08:04.422 ************************************ 00:08:04.422 ************************************ 00:08:04.422 END TEST event 00:08:04.422 ************************************ 00:08:04.422 00:08:04.422 real 1m20.415s 00:08:04.422 user 2m22.716s 00:08:04.422 sys 0m11.548s 00:08:04.422 17:57:33 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.422 17:57:33 event -- common/autotest_common.sh@10 -- # set +x 00:08:04.422 17:57:33 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:04.422 17:57:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.422 17:57:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.422 17:57:33 -- common/autotest_common.sh@10 -- # set +x 00:08:04.422 ************************************ 00:08:04.422 START TEST thread 00:08:04.422 ************************************ 00:08:04.422 17:57:33 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:04.422 * Looking for test storage... 
00:08:04.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:04.422 17:57:33 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:04.422 17:57:33 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:08:04.422 17:57:33 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:04.422 17:57:33 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:04.422 17:57:33 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.422 17:57:33 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.422 17:57:33 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.422 17:57:33 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.422 17:57:33 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.422 17:57:33 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.422 17:57:33 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.422 17:57:33 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.422 17:57:33 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.422 17:57:33 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.422 17:57:33 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.422 17:57:33 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:04.422 17:57:33 thread -- scripts/common.sh@345 -- # : 1 00:08:04.422 17:57:33 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.422 17:57:33 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.422 17:57:33 thread -- scripts/common.sh@365 -- # decimal 1 00:08:04.422 17:57:33 thread -- scripts/common.sh@353 -- # local d=1 00:08:04.422 17:57:33 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.422 17:57:33 thread -- scripts/common.sh@355 -- # echo 1 00:08:04.422 17:57:33 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.422 17:57:33 thread -- scripts/common.sh@366 -- # decimal 2 00:08:04.422 17:57:33 thread -- scripts/common.sh@353 -- # local d=2 00:08:04.422 17:57:33 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.681 17:57:33 thread -- scripts/common.sh@355 -- # echo 2 00:08:04.681 17:57:33 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.681 17:57:33 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.681 17:57:33 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.681 17:57:33 thread -- scripts/common.sh@368 -- # return 0 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.681 --rc genhtml_branch_coverage=1 00:08:04.681 --rc genhtml_function_coverage=1 00:08:04.681 --rc genhtml_legend=1 00:08:04.681 --rc geninfo_all_blocks=1 00:08:04.681 --rc geninfo_unexecuted_blocks=1 00:08:04.681 00:08:04.681 ' 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.681 --rc genhtml_branch_coverage=1 00:08:04.681 --rc genhtml_function_coverage=1 00:08:04.681 --rc genhtml_legend=1 00:08:04.681 --rc geninfo_all_blocks=1 00:08:04.681 --rc geninfo_unexecuted_blocks=1 00:08:04.681 00:08:04.681 ' 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:04.681 --rc genhtml_branch_coverage=1 00:08:04.681 --rc genhtml_function_coverage=1 00:08:04.681 --rc genhtml_legend=1 00:08:04.681 --rc geninfo_all_blocks=1 00:08:04.681 --rc geninfo_unexecuted_blocks=1 00:08:04.681 00:08:04.681 ' 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:04.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.681 --rc genhtml_branch_coverage=1 00:08:04.681 --rc genhtml_function_coverage=1 00:08:04.681 --rc genhtml_legend=1 00:08:04.681 --rc geninfo_all_blocks=1 00:08:04.681 --rc geninfo_unexecuted_blocks=1 00:08:04.681 00:08:04.681 ' 00:08:04.681 17:57:33 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.681 17:57:33 thread -- common/autotest_common.sh@10 -- # set +x 00:08:04.681 ************************************ 00:08:04.681 START TEST thread_poller_perf 00:08:04.681 ************************************ 00:08:04.681 17:57:33 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:04.681 [2024-11-05 17:57:33.816343] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:08:04.681 [2024-11-05 17:57:33.816690] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:08:04.681 [2024-11-05 17:57:33.999048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.940 [2024-11-05 17:57:34.111792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.940 Running 1000 pollers for 1 seconds with 1 microseconds period. 
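poller_perf's flags map directly onto the message above: -b is the number of pollers registered, -l the poller period in microseconds, and -t the run time in seconds; the suite runs the binary twice, and the zero-period variant follows below. A sketch of the two invocations with the path shortened (reading -l 0 as pollers that fire on every reactor pass is an interpretation consistent with the "0 microseconds period" wording, not something the log states):

    # -b: pollers to register   -l: period in usec   -t: run time in sec
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 us period
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # 0 us period, polled continuously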
00:08:06.315 [2024-11-05T17:57:35.638Z] ======================================
00:08:06.315 [2024-11-05T17:57:35.638Z] busy:2500119090 (cyc)
00:08:06.315 [2024-11-05T17:57:35.638Z] total_run_count: 407000
00:08:06.315 [2024-11-05T17:57:35.638Z] tsc_hz: 2490000000 (cyc)
00:08:06.315 [2024-11-05T17:57:35.638Z] ======================================
00:08:06.315 [2024-11-05T17:57:35.638Z] poller_cost: 6142 (cyc), 2466 (nsec)
00:08:06.315
00:08:06.315 real 0m1.579s
00:08:06.315 user 0m1.361s
00:08:06.315 sys 0m0.109s
00:08:06.315 17:57:35 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:06.315 17:57:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:06.315 ************************************
00:08:06.315 END TEST thread_poller_perf
00:08:06.315 ************************************
00:08:06.315 17:57:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:06.315 17:57:35 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']'
00:08:06.315 17:57:35 thread -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:06.315 17:57:35 thread -- common/autotest_common.sh@10 -- # set +x
00:08:06.315 ************************************
00:08:06.315 START TEST thread_poller_perf
00:08:06.316 ************************************
00:08:06.316 17:57:35 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:08:06.316 [2024-11-05 17:57:35.466352] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:08:06.316 [2024-11-05 17:57:35.466489] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60499 ]
00:08:06.574 [2024-11-05 17:57:35.647445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:06.574 [2024-11-05 17:57:35.759358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:06.574 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:08:07.951 [2024-11-05T17:57:37.274Z] ======================================
00:08:07.951 [2024-11-05T17:57:37.274Z] busy:2493713430 (cyc)
00:08:07.951 [2024-11-05T17:57:37.274Z] total_run_count: 5219000
00:08:07.952 [2024-11-05T17:57:37.275Z] tsc_hz: 2490000000 (cyc)
00:08:07.952 [2024-11-05T17:57:37.275Z] ======================================
00:08:07.952 [2024-11-05T17:57:37.275Z] poller_cost: 477 (cyc), 191 (nsec)
00:08:07.952
00:08:07.952 real 0m1.568s
00:08:07.952 user 0m1.351s
00:08:07.952 sys 0m0.109s
************************************
END TEST thread_poller_perf
************************************
00:08:07.952 17:57:36 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:07.952 17:57:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:07.952 17:57:37 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:07.952
00:08:07.952 real 0m3.530s
00:08:07.952 user 0m2.879s
00:08:07.952 sys 0m0.437s
00:08:07.952 ************************************
00:08:07.952 END TEST thread
00:08:07.952 ************************************
00:08:07.952 17:57:37 thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:07.952 17:57:37 thread -- common/autotest_common.sh@10 -- # set +x
00:08:07.952 17:57:37 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:08:07.952 17:57:37 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:07.952 17:57:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:08:07.952 17:57:37 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:07.952 17:57:37 -- common/autotest_common.sh@10 -- # set +x
00:08:07.952 ************************************
00:08:07.952 START TEST app_cmdline
00:08:07.952 ************************************
00:08:07.952 17:57:37 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:08:07.952 * Looking for test storage...
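Both summary blocks above are consistent with poller_cost being the busy cycle count divided by total_run_count, converted to nanoseconds through tsc_hz. A quick check of the second run's numbers (the first run works out the same way: 2500119090 / 407000 gives 6142 cyc, 2466 nsec):

    # poller_cost (cyc) = busy / total_run_count; nsec = cyc / tsc_hz * 1e9
    awk 'BEGIN { c = 2493713430 / 5219000; printf "%d (cyc), %d (nsec)\n", c, c / 2490000000 * 1e9 }'
    # prints: 477 (cyc), 191 (nsec)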
00:08:07.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:07.952 17:57:37 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.952 17:57:37 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.952 17:57:37 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:08.211 17:57:37 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.211 --rc genhtml_branch_coverage=1 00:08:08.211 --rc genhtml_function_coverage=1 00:08:08.211 --rc genhtml_legend=1 00:08:08.211 --rc geninfo_all_blocks=1 00:08:08.211 --rc geninfo_unexecuted_blocks=1 00:08:08.211 00:08:08.211 ' 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.211 --rc genhtml_branch_coverage=1 00:08:08.211 --rc genhtml_function_coverage=1 00:08:08.211 --rc genhtml_legend=1 00:08:08.211 --rc geninfo_all_blocks=1 00:08:08.211 --rc geninfo_unexecuted_blocks=1 00:08:08.211 
00:08:08.211 ' 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.211 --rc genhtml_branch_coverage=1 00:08:08.211 --rc genhtml_function_coverage=1 00:08:08.211 --rc genhtml_legend=1 00:08:08.211 --rc geninfo_all_blocks=1 00:08:08.211 --rc geninfo_unexecuted_blocks=1 00:08:08.211 00:08:08.211 ' 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:08.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.211 --rc genhtml_branch_coverage=1 00:08:08.211 --rc genhtml_function_coverage=1 00:08:08.211 --rc genhtml_legend=1 00:08:08.211 --rc geninfo_all_blocks=1 00:08:08.211 --rc geninfo_unexecuted_blocks=1 00:08:08.211 00:08:08.211 ' 00:08:08.211 17:57:37 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:08.211 17:57:37 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60588 00:08:08.211 17:57:37 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:08.211 17:57:37 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60588 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60588 ']' 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:08.211 17:57:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:08.211 [2024-11-05 17:57:37.451439] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:08:08.211 [2024-11-05 17:57:37.451804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60588 ] 00:08:08.470 [2024-11-05 17:57:37.633435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.470 [2024-11-05 17:57:37.750423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.406 17:57:38 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:09.406 17:57:38 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:08:09.406 17:57:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:09.665 { 00:08:09.666 "version": "SPDK v25.01-pre git sha1 8053cd6b8", 00:08:09.666 "fields": { 00:08:09.666 "major": 25, 00:08:09.666 "minor": 1, 00:08:09.666 "patch": 0, 00:08:09.666 "suffix": "-pre", 00:08:09.666 "commit": "8053cd6b8" 00:08:09.666 } 00:08:09.666 } 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:09.666 17:57:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:09.666 17:57:38 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:09.925 request: 00:08:09.925 { 00:08:09.925 "method": "env_dpdk_get_mem_stats", 00:08:09.925 "req_id": 1 00:08:09.925 } 00:08:09.925 Got JSON-RPC error response 00:08:09.925 response: 00:08:09.925 { 00:08:09.925 "code": -32601, 00:08:09.925 "message": "Method not found" 00:08:09.925 } 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.925 17:57:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60588 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60588 ']' 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60588 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60588 00:08:09.925 killing process with pid 60588 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60588' 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@971 -- # kill 60588 00:08:09.925 17:57:39 app_cmdline -- common/autotest_common.sh@976 -- # wait 60588 00:08:12.459 00:08:12.459 real 0m4.371s 00:08:12.459 user 0m4.519s 00:08:12.459 sys 0m0.662s 00:08:12.459 17:57:41 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.459 ************************************ 00:08:12.459 END TEST app_cmdline 00:08:12.459 ************************************ 00:08:12.459 17:57:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:12.459 17:57:41 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:12.459 17:57:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:12.459 17:57:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.459 17:57:41 -- common/autotest_common.sh@10 -- # set +x 00:08:12.459 ************************************ 00:08:12.459 START TEST version 00:08:12.459 ************************************ 00:08:12.459 17:57:41 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:12.459 * Looking for test storage... 
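The -32601 above is likewise an expected failure: this spdk_tgt instance was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so the two whitelisted methods answer normally and every other method, env_dpdk_get_mem_stats included, is rejected as "Method not found". A sketch of the same interaction, with repository paths shortened:

    # only the whitelisted RPCs are reachable on this target
    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py rpc_get_methods          # lists exactly the two allowed methods
    scripts/rpc.py spdk_get_version         # returns the version JSON shown earlier
    scripts/rpc.py env_dpdk_get_mem_stats   # JSON-RPC error -32601 "Method not found"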
00:08:12.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:12.459 17:57:41 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.459 17:57:41 version -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.459 17:57:41 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:12.459 17:57:41 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:12.459 17:57:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.459 17:57:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.459 17:57:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.459 17:57:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.459 17:57:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.459 17:57:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.459 17:57:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.459 17:57:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.459 17:57:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.459 17:57:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.459 17:57:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.459 17:57:41 version -- scripts/common.sh@344 -- # case "$op" in 00:08:12.459 17:57:41 version -- scripts/common.sh@345 -- # : 1 00:08:12.459 17:57:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.459 17:57:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.459 17:57:41 version -- scripts/common.sh@365 -- # decimal 1 00:08:12.459 17:57:41 version -- scripts/common.sh@353 -- # local d=1 00:08:12.459 17:57:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.459 17:57:41 version -- scripts/common.sh@355 -- # echo 1 00:08:12.459 17:57:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.718 17:57:41 version -- scripts/common.sh@366 -- # decimal 2 00:08:12.718 17:57:41 version -- scripts/common.sh@353 -- # local d=2 00:08:12.718 17:57:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.718 17:57:41 version -- scripts/common.sh@355 -- # echo 2 00:08:12.718 17:57:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.718 17:57:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.718 17:57:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.718 17:57:41 version -- scripts/common.sh@368 -- # return 0 00:08:12.718 17:57:41 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.718 17:57:41 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.718 --rc genhtml_branch_coverage=1 00:08:12.718 --rc genhtml_function_coverage=1 00:08:12.718 --rc genhtml_legend=1 00:08:12.718 --rc geninfo_all_blocks=1 00:08:12.718 --rc geninfo_unexecuted_blocks=1 00:08:12.718 00:08:12.718 ' 00:08:12.718 17:57:41 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.718 --rc genhtml_branch_coverage=1 00:08:12.718 --rc genhtml_function_coverage=1 00:08:12.718 --rc genhtml_legend=1 00:08:12.718 --rc geninfo_all_blocks=1 00:08:12.718 --rc geninfo_unexecuted_blocks=1 00:08:12.718 00:08:12.718 ' 00:08:12.718 17:57:41 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.718 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:12.718 --rc genhtml_branch_coverage=1 00:08:12.718 --rc genhtml_function_coverage=1 00:08:12.718 --rc genhtml_legend=1 00:08:12.718 --rc geninfo_all_blocks=1 00:08:12.718 --rc geninfo_unexecuted_blocks=1 00:08:12.718 00:08:12.718 ' 00:08:12.718 17:57:41 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.718 --rc genhtml_branch_coverage=1 00:08:12.718 --rc genhtml_function_coverage=1 00:08:12.718 --rc genhtml_legend=1 00:08:12.718 --rc geninfo_all_blocks=1 00:08:12.718 --rc geninfo_unexecuted_blocks=1 00:08:12.718 00:08:12.718 ' 00:08:12.718 17:57:41 version -- app/version.sh@17 -- # get_header_version major 00:08:12.718 17:57:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # cut -f2 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.718 17:57:41 version -- app/version.sh@17 -- # major=25 00:08:12.718 17:57:41 version -- app/version.sh@18 -- # get_header_version minor 00:08:12.718 17:57:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # cut -f2 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.718 17:57:41 version -- app/version.sh@18 -- # minor=1 00:08:12.718 17:57:41 version -- app/version.sh@19 -- # get_header_version patch 00:08:12.718 17:57:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # cut -f2 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.718 17:57:41 version -- app/version.sh@19 -- # patch=0 00:08:12.718 17:57:41 version -- app/version.sh@20 -- # get_header_version suffix 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # cut -f2 00:08:12.718 17:57:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:12.718 17:57:41 version -- app/version.sh@14 -- # tr -d '"' 00:08:12.718 17:57:41 version -- app/version.sh@20 -- # suffix=-pre 00:08:12.718 17:57:41 version -- app/version.sh@22 -- # version=25.1 00:08:12.718 17:57:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:12.718 17:57:41 version -- app/version.sh@28 -- # version=25.1rc0 00:08:12.718 17:57:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:12.718 17:57:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:12.718 17:57:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:12.718 17:57:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:12.718 ************************************ 00:08:12.718 END TEST version 00:08:12.718 ************************************ 00:08:12.718 00:08:12.718 real 0m0.333s 00:08:12.718 user 0m0.186s 00:08:12.718 sys 0m0.204s 00:08:12.718 17:57:41 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.718 17:57:41 version -- common/autotest_common.sh@10 -- # set +x 00:08:12.718 17:57:41 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:12.718 17:57:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:12.718 17:57:41 -- spdk/autotest.sh@194 -- # uname -s 00:08:12.718 17:57:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:12.718 17:57:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:12.718 17:57:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:12.718 17:57:41 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:12.718 17:57:41 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:12.718 17:57:41 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:12.718 17:57:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.718 17:57:41 -- common/autotest_common.sh@10 -- # set +x 00:08:12.718 ************************************ 00:08:12.718 START TEST blockdev_nvme 00:08:12.718 ************************************ 00:08:12.718 17:57:41 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:12.978 * Looking for test storage... 00:08:12.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:12.978 17:57:42 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:12.978 17:57:42 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:08:12.978 17:57:42 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:12.978 17:57:42 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:12.978 17:57:42 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.979 17:57:42 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.979 --rc genhtml_branch_coverage=1 00:08:12.979 --rc genhtml_function_coverage=1 00:08:12.979 --rc genhtml_legend=1 00:08:12.979 --rc geninfo_all_blocks=1 00:08:12.979 --rc geninfo_unexecuted_blocks=1 00:08:12.979 00:08:12.979 ' 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.979 --rc genhtml_branch_coverage=1 00:08:12.979 --rc genhtml_function_coverage=1 00:08:12.979 --rc genhtml_legend=1 00:08:12.979 --rc geninfo_all_blocks=1 00:08:12.979 --rc geninfo_unexecuted_blocks=1 00:08:12.979 00:08:12.979 ' 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.979 --rc genhtml_branch_coverage=1 00:08:12.979 --rc genhtml_function_coverage=1 00:08:12.979 --rc genhtml_legend=1 00:08:12.979 --rc geninfo_all_blocks=1 00:08:12.979 --rc geninfo_unexecuted_blocks=1 00:08:12.979 00:08:12.979 ' 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:12.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.979 --rc genhtml_branch_coverage=1 00:08:12.979 --rc genhtml_function_coverage=1 00:08:12.979 --rc genhtml_legend=1 00:08:12.979 --rc geninfo_all_blocks=1 00:08:12.979 --rc geninfo_unexecuted_blocks=1 00:08:12.979 00:08:12.979 ' 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:12.979 17:57:42 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60777 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:12.979 17:57:42 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60777 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 60777 ']' 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:12.979 17:57:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.237 [2024-11-05 17:57:42.328127] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:08:13.237 [2024-11-05 17:57:42.328404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60777 ] 00:08:13.238 [2024-11-05 17:57:42.507031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.497 [2024-11-05 17:57:42.621344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.433 17:57:43 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.433 17:57:43 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:08:14.433 17:57:43 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:14.433 17:57:43 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:08:14.433 17:57:43 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:14.433 17:57:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:14.433 17:57:43 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:14.433 17:57:43 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:14.433 17:57:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.433 17:57:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.692 17:57:43 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.692 17:57:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:08:14.692 17:57:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.692 17:57:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.692 17:57:43 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.692 17:57:43 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:14.692 17:57:43 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.692 17:57:43 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:14.692 17:57:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.952 17:57:44 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.952 17:57:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:14.952 17:57:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:14.953 17:57:44 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a60b332c-3ac2-4a66-8fd8-97aaa8e1bb92"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a60b332c-3ac2-4a66-8fd8-97aaa8e1bb92",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "ec554be7-089f-47c8-b2d9-358dda2d6471"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ec554be7-089f-47c8-b2d9-358dda2d6471",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "0ab56454-c329-4fa0-969f-2c9535539ab0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0ab56454-c329-4fa0-969f-2c9535539ab0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d29200e3-3f53-47e1-9b1c-c9eeeced8110"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d29200e3-3f53-47e1-9b1c-c9eeeced8110",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "dfd18bb2-31be-4dfc-9d17-a85999b632f4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "dfd18bb2-31be-4dfc-9d17-a85999b632f4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1c525439-8114-42db-9cd6-1604606cc689"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1c525439-8114-42db-9cd6-1604606cc689",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:14.953 17:57:44 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:14.953 17:57:44 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:14.953 17:57:44 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:14.953 17:57:44 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60777 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 60777 ']' 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 60777 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:08:14.953 17:57:44 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60777 00:08:14.953 killing process with pid 60777 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60777' 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 60777 00:08:14.953 17:57:44 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 60777 00:08:17.489 17:57:46 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:17.489 17:57:46 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:17.489 17:57:46 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:08:17.489 17:57:46 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.489 17:57:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:17.489 ************************************ 00:08:17.489 START TEST bdev_hello_world 00:08:17.489 ************************************ 00:08:17.489 17:57:46 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:17.489 [2024-11-05 17:57:46.615469] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:08:17.489 [2024-11-05 17:57:46.615584] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60880 ] 00:08:17.489 [2024-11-05 17:57:46.798230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.748 [2024-11-05 17:57:46.912710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.315 [2024-11-05 17:57:47.563435] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:18.315 [2024-11-05 17:57:47.563484] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:18.315 [2024-11-05 17:57:47.563522] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:18.315 [2024-11-05 17:57:47.566360] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:18.315 [2024-11-05 17:57:47.566994] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:18.315 [2024-11-05 17:57:47.567030] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:18.315 [2024-11-05 17:57:47.567249] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:08:18.315 00:08:18.315 [2024-11-05 17:57:47.567277] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:19.699 00:08:19.699 real 0m2.116s 00:08:19.699 user 0m1.753s 00:08:19.699 sys 0m0.254s 00:08:19.699 17:57:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:19.699 ************************************ 00:08:19.699 END TEST bdev_hello_world 00:08:19.699 ************************************ 00:08:19.699 17:57:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:19.699 17:57:48 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:19.699 17:57:48 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:19.699 17:57:48 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.699 17:57:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.699 ************************************ 00:08:19.699 START TEST bdev_bounds 00:08:19.699 ************************************ 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60922 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:19.699 Process bdevio pid: 60922 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60922' 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60922 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 60922 ']' 00:08:19.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:19.699 17:57:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:19.699 [2024-11-05 17:57:48.811355] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
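The hello_bdev run above and the bdevio run that follows can be reproduced by hand outside the CI harness. A minimal sketch, assuming the vagrant paths used in this run, a built SPDK tree, and the same bdev.json that gen_nvme.sh produced earlier; the SPDK_DIR variable and the backgrounding of bdevio are illustrative conveniences, not part of the harness:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # hello_bdev opens the named bdev, writes "Hello World!" and reads it back,
  # which is the write_complete/read_complete sequence logged above.
  "$SPDK_DIR/build/examples/hello_bdev" --json "$SPDK_DIR/test/bdev/bdev.json" -b Nvme0n1
  # bdevio with -w starts the app and waits; tests.py perform_tests then
  # triggers the per-bdev CUnit suites whose results follow below.
  "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
  "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests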
00:08:19.699 [2024-11-05 17:57:48.811716] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60922 ] 00:08:19.699 [2024-11-05 17:57:48.989041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:19.959 [2024-11-05 17:57:49.107102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.959 [2024-11-05 17:57:49.107248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.959 [2024-11-05 17:57:49.107283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.526 17:57:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:20.526 17:57:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:08:20.526 17:57:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:20.784 I/O targets: 00:08:20.784 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:20.784 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:20.784 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.785 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.785 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:20.785 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:20.785 00:08:20.785 00:08:20.785 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.785 http://cunit.sourceforge.net/ 00:08:20.785 00:08:20.785 00:08:20.785 Suite: bdevio tests on: Nvme3n1 00:08:20.785 Test: blockdev write read block ...passed 00:08:20.785 Test: blockdev write zeroes read block ...passed 00:08:20.785 Test: blockdev write zeroes read no split ...passed 00:08:20.785 Test: blockdev write zeroes read split ...passed 00:08:20.785 Test: blockdev write zeroes read split partial ...passed 00:08:20.785 Test: blockdev reset ...[2024-11-05 17:57:49.966039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:20.785 [2024-11-05 17:57:49.969586] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:08:20.785 passed 00:08:20.785 Test: blockdev write read 8 blocks ...passed 00:08:20.785 Test: blockdev write read size > 128k ...passed 00:08:20.785 Test: blockdev write read invalid size ...passed 00:08:20.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.785 Test: blockdev write read max offset ...passed 00:08:20.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.785 Test: blockdev writev readv 8 blocks ...passed 00:08:20.785 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.785 Test: blockdev writev readv block ...passed 00:08:20.785 Test: blockdev writev readv size > 128k ...passed 00:08:20.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.785 Test: blockdev comparev and writev ...[2024-11-05 17:57:49.978075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba80a000 len:0x1000 00:08:20.785 [2024-11-05 17:57:49.978121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.785 passed 00:08:20.785 Test: blockdev nvme passthru rw ...passed 00:08:20.785 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:57:49.978925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.785 [2024-11-05 17:57:49.978955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.785 passed 00:08:20.785 Test: blockdev nvme admin passthru ...passed 00:08:20.785 Test: blockdev copy ...passed 00:08:20.785 Suite: bdevio tests on: Nvme2n3 00:08:20.785 Test: blockdev write read block ...passed 00:08:20.785 Test: blockdev write zeroes read block ...passed 00:08:20.785 Test: blockdev write zeroes read no split ...passed 00:08:20.785 Test: blockdev write zeroes read split ...passed 00:08:20.785 Test: blockdev write zeroes read split partial ...passed 00:08:20.785 Test: blockdev reset ...[2024-11-05 17:57:50.055660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:20.785 [2024-11-05 17:57:50.059452] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:20.785 passed 00:08:20.785 Test: blockdev write read 8 blocks ...passed 00:08:20.785 Test: blockdev write read size > 128k ...passed 00:08:20.785 Test: blockdev write read invalid size ...passed 00:08:20.785 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.785 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.785 Test: blockdev write read max offset ...passed 00:08:20.785 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.785 Test: blockdev writev readv 8 blocks ...passed 00:08:20.785 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.785 Test: blockdev writev readv block ...passed 00:08:20.785 Test: blockdev writev readv size > 128k ...passed 00:08:20.785 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.785 Test: blockdev comparev and writev ...[2024-11-05 17:57:50.067384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29e206000 len:0x1000 00:08:20.785 [2024-11-05 17:57:50.067559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:20.785 passed 00:08:20.785 Test: blockdev nvme passthru rw ...passed 00:08:20.785 Test: blockdev nvme passthru vendor specific ...passed 00:08:20.785 Test: blockdev nvme admin passthru ...[2024-11-05 17:57:50.068330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:20.785 [2024-11-05 17:57:50.068367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:20.785 passed 00:08:20.785 Test: blockdev copy ...passed 00:08:20.785 Suite: bdevio tests on: Nvme2n2 00:08:20.785 Test: blockdev write read block ...passed 00:08:20.785 Test: blockdev write zeroes read block ...passed 00:08:21.044 Test: blockdev write zeroes read no split ...passed 00:08:21.044 Test: blockdev write zeroes read split ...passed 00:08:21.044 Test: blockdev write zeroes read split partial ...passed 00:08:21.044 Test: blockdev reset ...[2024-11-05 17:57:50.145696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:21.044 [2024-11-05 17:57:50.149148] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:21.044 passed 00:08:21.044 Test: blockdev write read 8 blocks ...passed 00:08:21.044 Test: blockdev write read size > 128k ...passed 00:08:21.044 Test: blockdev write read invalid size ...passed 00:08:21.044 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.044 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.044 Test: blockdev write read max offset ...passed 00:08:21.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.044 Test: blockdev writev readv 8 blocks ...passed 00:08:21.044 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.044 Test: blockdev writev readv block ...passed 00:08:21.044 Test: blockdev writev readv size > 128k ...passed 00:08:21.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.044 Test: blockdev comparev and writev ...[2024-11-05 17:57:50.156611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d603c000 len:0x1000 00:08:21.044 [2024-11-05 17:57:50.156669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:21.044 passed 00:08:21.044 Test: blockdev nvme passthru rw ...passed 00:08:21.044 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:57:50.157419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:21.044 [2024-11-05 17:57:50.157447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:21.044 passed 00:08:21.044 Test: blockdev nvme admin passthru ...passed 00:08:21.044 Test: blockdev copy ...passed 00:08:21.044 Suite: bdevio tests on: Nvme2n1 00:08:21.044 Test: blockdev write read block ...passed 00:08:21.044 Test: blockdev write zeroes read block ...passed 00:08:21.044 Test: blockdev write zeroes read no split ...passed 00:08:21.044 Test: blockdev write zeroes read split ...passed 00:08:21.045 Test: blockdev write zeroes read split partial ...passed 00:08:21.045 Test: blockdev reset ...[2024-11-05 17:57:50.230875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:21.045 [2024-11-05 17:57:50.235154] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:08:21.045 passed 00:08:21.045 Test: blockdev write read 8 blocks ...passed 00:08:21.045 Test: blockdev write read size > 128k ...passed 00:08:21.045 Test: blockdev write read invalid size ...passed 00:08:21.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.045 Test: blockdev write read max offset ...passed 00:08:21.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.045 Test: blockdev writev readv 8 blocks ...passed 00:08:21.045 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.045 Test: blockdev writev readv block ...passed 00:08:21.045 Test: blockdev writev readv size > 128k ...passed 00:08:21.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.045 Test: blockdev comparev and writev ...[2024-11-05 17:57:50.244716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6038000 len:0x1000 00:08:21.045 [2024-11-05 17:57:50.244763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:21.045 passed 00:08:21.045 Test: blockdev nvme passthru rw ...passed 00:08:21.045 Test: blockdev nvme passthru vendor specific ...passed 00:08:21.045 Test: blockdev nvme admin passthru ...[2024-11-05 17:57:50.245621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:21.045 [2024-11-05 17:57:50.245656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:21.045 passed 00:08:21.045 Test: blockdev copy ...passed 00:08:21.045 Suite: bdevio tests on: Nvme1n1 00:08:21.045 Test: blockdev write read block ...passed 00:08:21.045 Test: blockdev write zeroes read block ...passed 00:08:21.045 Test: blockdev write zeroes read no split ...passed 00:08:21.045 Test: blockdev write zeroes read split ...passed 00:08:21.045 Test: blockdev write zeroes read split partial ...passed 00:08:21.045 Test: blockdev reset ...[2024-11-05 17:57:50.325783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:21.045 [2024-11-05 17:57:50.329393] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:08:21.045 passed 00:08:21.045 Test: blockdev write read 8 blocks ...passed 00:08:21.045 Test: blockdev write read size > 128k ...passed 00:08:21.045 Test: blockdev write read invalid size ...passed 00:08:21.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.045 Test: blockdev write read max offset ...passed 00:08:21.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.045 Test: blockdev writev readv 8 blocks ...passed 00:08:21.045 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.045 Test: blockdev writev readv block ...passed 00:08:21.045 Test: blockdev writev readv size > 128k ...passed 00:08:21.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.045 Test: blockdev comparev and writev ...[2024-11-05 17:57:50.338685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6034000 len:0x1000 00:08:21.045 [2024-11-05 17:57:50.338855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:21.045 passed 00:08:21.045 Test: blockdev nvme passthru rw ...passed 00:08:21.045 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:57:50.339922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:21.045 [2024-11-05 17:57:50.340062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:21.045 passed 00:08:21.045 Test: blockdev nvme admin passthru ...passed 00:08:21.045 Test: blockdev copy ...passed 00:08:21.045 Suite: bdevio tests on: Nvme0n1 00:08:21.045 Test: blockdev write read block ...passed 00:08:21.045 Test: blockdev write zeroes read block ...passed 00:08:21.304 Test: blockdev write zeroes read no split ...passed 00:08:21.304 Test: blockdev write zeroes read split ...passed 00:08:21.304 Test: blockdev write zeroes read split partial ...passed 00:08:21.304 Test: blockdev reset ...[2024-11-05 17:57:50.418541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:21.304 [2024-11-05 17:57:50.422308] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:21.304 passed 00:08:21.304 Test: blockdev write read 8 blocks ...passed 00:08:21.304 Test: blockdev write read size > 128k ...passed 00:08:21.304 Test: blockdev write read invalid size ...passed 00:08:21.304 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:21.304 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:21.304 Test: blockdev write read max offset ...passed 00:08:21.304 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:21.304 Test: blockdev writev readv 8 blocks ...passed 00:08:21.304 Test: blockdev writev readv 30 x 1block ...passed 00:08:21.304 Test: blockdev writev readv block ...passed 00:08:21.304 Test: blockdev writev readv size > 128k ...passed 00:08:21.304 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:21.304 Test: blockdev comparev and writev ...[2024-11-05 17:57:50.430247] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:21.304 separate metadata which is not supported yet.
00:08:21.304 passed 00:08:21.304 Test: blockdev nvme passthru rw ...passed 00:08:21.304 Test: blockdev nvme passthru vendor specific ...passed 00:08:21.304 Test: blockdev nvme admin passthru ...[2024-11-05 17:57:50.430941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:21.304 [2024-11-05 17:57:50.430989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:21.304 passed 00:08:21.304 Test: blockdev copy ...passed 00:08:21.304 00:08:21.304 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.304 suites 6 6 n/a 0 0 00:08:21.304 tests 138 138 138 0 0 00:08:21.305 asserts 893 893 893 0 n/a 00:08:21.305 00:08:21.305 Elapsed time = 1.456 seconds 00:08:21.305 0 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60922 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 60922 ']' 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 60922 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60922 00:08:21.305 killing process with pid 60922 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60922' 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 60922 00:08:21.305 17:57:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 60922 00:08:22.239 17:57:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:22.239 00:08:22.239 real 0m2.812s 00:08:22.239 user 0m7.239s 00:08:22.239 sys 0m0.410s 00:08:22.239 17:57:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.239 17:57:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:22.239 ************************************ 00:08:22.239 END TEST bdev_bounds 00:08:22.239 ************************************ 00:08:22.498 17:57:51 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:22.498 17:57:51 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:22.498 17:57:51 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.498 17:57:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.498 ************************************ 00:08:22.498 START TEST bdev_nbd 00:08:22.498 ************************************ 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:22.498 17:57:51 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60982 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60982 /var/tmp/spdk-nbd.sock 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 60982 ']' 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.498 17:57:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:22.498 [2024-11-05 17:57:51.720268] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
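The nbd_function_test body that follows boils down to a start/verify/stop round trip per bdev over the dedicated /var/tmp/spdk-nbd.sock RPC socket. A condensed sketch of that round trip for one disk, assuming the bdev_svc app being started here is up and Nvme0n1 is loaded; the RPC shell variable and the /tmp output path are illustrative stand-ins, the commands themselves match those visible in the log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $RPC nbd_start_disk Nvme0n1 /dev/nbd0                          # export the bdev over NBD
  grep -q -w nbd0 /proc/partitions                               # the waitfornbd readiness check
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one 4 KiB O_DIRECT read
  $RPC nbd_stop_disk /dev/nbd0                                   # tear the export down
  $RPC nbd_get_disks                                             # prints [] once nothing is exported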
00:08:22.498 [2024-11-05 17:57:51.720393] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.759 [2024-11-05 17:57:51.906626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.759 [2024-11-05 17:57:52.021791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.692 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.983 1+0 records in 
00:08:23.983 1+0 records out 00:08:23.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635315 s, 6.4 MB/s 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.983 1+0 records in 00:08:23.983 1+0 records out 00:08:23.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495819 s, 8.3 MB/s 00:08:23.983 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.241 1+0 records in 00:08:24.241 1+0 records out 00:08:24.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765395 s, 5.4 MB/s 00:08:24.241 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.500 1+0 records in 00:08:24.500 1+0 records out 00:08:24.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526912 s, 7.8 MB/s 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.500 17:57:53 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:24.500 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.759 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:24.759 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:24.759 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:24.759 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:24.759 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:24.759 1+0 records in 00:08:24.759 1+0 records out 00:08:24.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635074 s, 6.4 MB/s 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:24.759 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.018 1+0 records in 00:08:25.018 1+0 records out 00:08:25.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804945 s, 5.1 MB/s 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:25.018 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd0", 00:08:25.277 "bdev_name": "Nvme0n1" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd1", 00:08:25.277 "bdev_name": "Nvme1n1" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd2", 00:08:25.277 "bdev_name": "Nvme2n1" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd3", 00:08:25.277 "bdev_name": "Nvme2n2" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd4", 00:08:25.277 "bdev_name": "Nvme2n3" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd5", 00:08:25.277 "bdev_name": "Nvme3n1" 00:08:25.277 } 00:08:25.277 ]' 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd0", 00:08:25.277 "bdev_name": "Nvme0n1" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd1", 00:08:25.277 "bdev_name": "Nvme1n1" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd2", 00:08:25.277 "bdev_name": "Nvme2n1" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd3", 00:08:25.277 "bdev_name": "Nvme2n2" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd4", 00:08:25.277 "bdev_name": "Nvme2n3" 00:08:25.277 }, 00:08:25.277 { 00:08:25.277 "nbd_device": "/dev/nbd5", 00:08:25.277 "bdev_name": "Nvme3n1" 00:08:25.277 } 00:08:25.277 ]' 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.277 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.537 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.537 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.537 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.537 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.537 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.538 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.538 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.538 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.538 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.538 17:57:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.814 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.072 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.330 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.588 17:57:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.846 17:57:56 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:26.846 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:27.104 /dev/nbd0 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:27.104 
17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.104 1+0 records in 00:08:27.104 1+0 records out 00:08:27.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385612 s, 10.6 MB/s 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.104 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:27.362 /dev/nbd1 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.362 1+0 records in 00:08:27.362 1+0 records out 00:08:27.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501138 s, 8.2 MB/s 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@891 -- # return 0 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.362 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:27.621 /dev/nbd10 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.621 1+0 records in 00:08:27.621 1+0 records out 00:08:27.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712919 s, 5.7 MB/s 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.621 17:57:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:27.879 /dev/nbd11 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:27.879 17:57:57 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.879 1+0 records in 00:08:27.879 1+0 records out 00:08:27.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076689 s, 5.3 MB/s 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:27.879 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:28.137 /dev/nbd12 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.137 1+0 records in 00:08:28.137 1+0 records out 00:08:28.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657397 s, 6.2 MB/s 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:28.137 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:28.396 /dev/nbd13 
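The waitfornbd check that follows for /dev/nbd13 repeats the same readiness logic traced for the five devices before it: poll /proc/partitions until the kernel registers the device, then read one 4 KiB block with O_DIRECT and confirm a non-empty result. A minimal standalone sketch of that logic, reconstructed from the xtrace output rather than copied from autotest_common.sh (the retry delay and the temp-file path are assumptions; the trace does not show them):

    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest i size
        # Wait for the kernel to register the device in /proc/partitions (bounded retry).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # delay is an assumption; not visible in the trace
        done
        # One 4 KiB O_DIRECT read must come back non-empty, mirroring the
        # dd / stat -c %s / rm -f sequence visible in the trace above.
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]
    }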
00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:28.396 1+0 records in 00:08:28.396 1+0 records out 00:08:28.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802786 s, 5.1 MB/s 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.396 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd0", 00:08:28.655 "bdev_name": "Nvme0n1" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd1", 00:08:28.655 "bdev_name": "Nvme1n1" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd10", 00:08:28.655 "bdev_name": "Nvme2n1" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd11", 00:08:28.655 "bdev_name": "Nvme2n2" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd12", 00:08:28.655 "bdev_name": "Nvme2n3" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd13", 00:08:28.655 "bdev_name": "Nvme3n1" 00:08:28.655 } 00:08:28.655 ]' 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd0", 00:08:28.655 "bdev_name": "Nvme0n1" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd1", 00:08:28.655 "bdev_name": "Nvme1n1" 00:08:28.655 
}, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd10", 00:08:28.655 "bdev_name": "Nvme2n1" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd11", 00:08:28.655 "bdev_name": "Nvme2n2" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd12", 00:08:28.655 "bdev_name": "Nvme2n3" 00:08:28.655 }, 00:08:28.655 { 00:08:28.655 "nbd_device": "/dev/nbd13", 00:08:28.655 "bdev_name": "Nvme3n1" 00:08:28.655 } 00:08:28.655 ]' 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:28.655 /dev/nbd1 00:08:28.655 /dev/nbd10 00:08:28.655 /dev/nbd11 00:08:28.655 /dev/nbd12 00:08:28.655 /dev/nbd13' 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:28.655 /dev/nbd1 00:08:28.655 /dev/nbd10 00:08:28.655 /dev/nbd11 00:08:28.655 /dev/nbd12 00:08:28.655 /dev/nbd13' 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:28.655 256+0 records in 00:08:28.655 256+0 records out 00:08:28.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124259 s, 84.4 MB/s 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.655 17:57:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:28.914 256+0 records in 00:08:28.914 256+0 records out 00:08:28.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12635 s, 8.3 MB/s 00:08:28.914 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.914 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:28.914 256+0 records in 00:08:28.914 256+0 records out 00:08:28.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144317 s, 7.3 MB/s 00:08:28.914 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.914 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:29.204 256+0 records in 00:08:29.204 256+0 records out 00:08:29.204 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.122674 s, 8.5 MB/s 00:08:29.204 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.204 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:29.204 256+0 records in 00:08:29.204 256+0 records out 00:08:29.204 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129231 s, 8.1 MB/s 00:08:29.204 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.204 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:29.463 256+0 records in 00:08:29.463 256+0 records out 00:08:29.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123132 s, 8.5 MB/s 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:29.463 256+0 records in 00:08:29.463 256+0 records out 00:08:29.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128004 s, 8.2 MB/s 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.463 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.723 17:57:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:29.723 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:29.723 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:29.723 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:29.723 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.723 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.723 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.983 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.242 17:57:59 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.242 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.501 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.760 17:57:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.018 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:31.276 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:31.534 malloc_lvol_verify 00:08:31.534 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:31.791 28c48664-c63d-4bb8-a723-c9d31b6e4fb9 00:08:31.791 17:58:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:31.791 993493b0-eeb7-4ff2-9d2c-d31e745b6579 00:08:31.791 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:32.049 /dev/nbd0 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:32.049 mke2fs 1.47.0 (5-Feb-2023) 00:08:32.049 Discarding device blocks: 0/4096 done 00:08:32.049 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:32.049 00:08:32.049 Allocating group tables: 0/1 done 00:08:32.049 Writing inode tables: 0/1 done 00:08:32.049 Creating journal (1024 blocks): done 00:08:32.049 Writing superblocks and filesystem accounting information: 0/1 done 00:08:32.049 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:32.049 17:58:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.049 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60982 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 60982 ']' 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 60982 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.307 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60982 00:08:32.565 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:32.565 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:32.565 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60982' 00:08:32.565 killing process with pid 60982 00:08:32.565 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 60982 00:08:32.565 17:58:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 60982 00:08:33.500 17:58:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:33.500 00:08:33.500 real 0m11.217s 00:08:33.500 user 0m14.440s 00:08:33.500 sys 0m4.695s 00:08:33.500 ************************************ 00:08:33.500 END TEST bdev_nbd 00:08:33.500 ************************************ 00:08:33.500 17:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.500 17:58:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:33.758 17:58:02 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:33.758 17:58:02 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:08:33.758 skipping fio tests on NVMe due to multi-ns failures. 00:08:33.758 17:58:02 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
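Behind the banner above, the data check at the heart of the bdev_nbd test that just finished is a plain dd/cmp round trip over every exported device. A condensed sketch of the nbd_dd_data_verify write and verify phases as reconstructed from the trace; the commands match those captured, but the pattern-file path is shortened for illustration:

    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    pattern=/tmp/nbdrandtest
    # Write phase: generate 1 MiB of random data, push it to each device with O_DIRECT.
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
    done
    # Verify phase: the first 1 MiB of every device must match the pattern byte-for-byte.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$pattern" "$dev"
    done
    rm "$pattern"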
00:08:33.758 17:58:02 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:33.758 17:58:02 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:33.758 17:58:02 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:08:33.758 17:58:02 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:33.758 17:58:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:33.758 ************************************
00:08:33.758 START TEST bdev_verify
00:08:33.758 ************************************
00:08:33.758 17:58:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:33.758 [2024-11-05 17:58:02.985164] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:08:33.758 [2024-11-05 17:58:02.985302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61372 ]
00:08:34.017 [2024-11-05 17:58:03.168789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:34.017 [2024-11-05 17:58:03.287365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:34.017 [2024-11-05 17:58:03.287396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:34.953 Running I/O for 5 seconds...
00:08:36.822 20608.00 IOPS, 80.50 MiB/s [2024-11-05T17:58:07.521Z]
21120.00 IOPS, 82.50 MiB/s [2024-11-05T17:58:08.458Z]
21184.00 IOPS, 82.75 MiB/s [2024-11-05T17:58:09.395Z]
22016.00 IOPS, 86.00 MiB/s [2024-11-05T17:58:09.395Z]
21875.20 IOPS, 85.45 MiB/s
00:08:40.072 Latency(us)
00:08:40.072 [2024-11-05T17:58:09.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:40.072 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x0 length 0xbd0bd
00:08:40.072 Nvme0n1 : 5.05 1798.87 7.03 0.00 0.00 70961.77 15370.69 84222.97
00:08:40.072 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:40.072 Nvme0n1 : 5.05 1797.87 7.02 0.00 0.00 70967.62 12896.64 85065.20
00:08:40.072 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x0 length 0xa0000
00:08:40.072 Nvme1n1 : 5.05 1798.43 7.03 0.00 0.00 70847.89 17897.38 76221.79
00:08:40.072 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0xa0000 length 0xa0000
00:08:40.072 Nvme1n1 : 5.06 1797.17 7.02 0.00 0.00 70864.17 14844.30 77485.13
00:08:40.072 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x0 length 0x80000
00:08:40.072 Nvme2n1 : 5.05 1797.97 7.02 0.00 0.00 70653.50 19792.40 64009.46
00:08:40.072 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x80000 length 0x80000
00:08:40.072 Nvme2n1 : 5.06 1796.50 7.02 0.00 0.00 70674.08 15791.81 66115.03
00:08:40.072 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x0 length 0x80000
00:08:40.072 Nvme2n2 : 5.06 1797.26 7.02 0.00 0.00 70531.63 18844.89 62325.00
00:08:40.072 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x80000 length 0x80000
00:08:40.072 Nvme2n2 : 5.07 1804.17 7.05 0.00 0.00 70268.27 6737.84 64851.69
00:08:40.072 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x0 length 0x80000
00:08:40.072 Nvme2n3 : 5.08 1812.50 7.08 0.00 0.00 69881.66 11159.54 65272.80
00:08:40.072 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x80000 length 0x80000
00:08:40.072 Nvme2n3 : 5.07 1803.52 7.04 0.00 0.00 70148.60 7422.15 65272.80
00:08:40.072 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x0 length 0x20000
00:08:40.072 Nvme3n1 : 5.09 1811.98 7.08 0.00 0.00 69761.84 9685.64 66536.15
00:08:40.072 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:40.072 Verification LBA range: start 0x20000 length 0x20000
00:08:40.072 Nvme3n1 : 5.09 1812.34 7.08 0.00 0.00 69760.40 8580.22 65272.80
00:08:40.072 [2024-11-05T17:58:09.395Z] ===================================================================================================================
00:08:40.072 [2024-11-05T17:58:09.395Z] Total : 21628.58 84.49 0.00 0.00 70440.93 6737.84 85065.20
00:08:41.483
00:08:41.483 real 0m7.634s
00:08:41.483 user 0m14.103s
00:08:41.483 sys 0m0.308s
00:08:41.484 17:58:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:41.484 17:58:10 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:08:41.484 ************************************
00:08:41.484 END TEST bdev_verify
00:08:41.484 ************************************
00:08:41.484 17:58:10 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:41.484 17:58:10 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']'
00:08:41.484 17:58:10 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:41.484 17:58:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:41.484 ************************************
00:08:41.484 START TEST bdev_verify_big_io
00:08:41.484 ************************************
00:08:41.484 17:58:10 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:41.484 [2024-11-05 17:58:10.679201] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
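For reference, the bdevperf invocation is identical across the verify stage that just finished and the big-I/O stage starting above, apart from the -o I/O size (4096 vs 65536 bytes). The flag glosses in the sketch below are my reading of bdevperf's usual option semantics, not something stated in this log; -C is passed through as captured, its meaning is not shown here:

    # -q 128: 128 outstanding I/Os per job; -o: I/O size in bytes
    # -w verify: read-back-and-check workload; -t 5: run for five seconds
    # -m 0x3: core mask selecting cores 0 and 1 (hence "Total cores available: 2")
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3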
00:08:41.484 [2024-11-05 17:58:10.679325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61471 ]
00:08:41.742 [2024-11-05 17:58:10.842099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:41.742 [2024-11-05 17:58:10.955348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:41.742 [2024-11-05 17:58:10.955363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:42.679 Running I/O for 5 seconds...
00:08:47.113 1904.00 IOPS, 119.00 MiB/s [2024-11-05T17:58:17.813Z]
3248.50 IOPS, 203.03 MiB/s [2024-11-05T17:58:17.813Z]
3866.00 IOPS, 241.62 MiB/s
00:08:48.490 Latency(us)
00:08:48.490 [2024-11-05T17:58:17.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:48.490 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x0 length 0xbd0b
00:08:48.490 Nvme0n1 : 5.50 162.98 10.19 0.00 0.00 764735.77 29056.93 788327.02
00:08:48.490 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:48.490 Nvme0n1 : 5.50 162.83 10.18 0.00 0.00 763822.34 22740.20 815278.37
00:08:48.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x0 length 0xa000
00:08:48.490 Nvme1n1 : 5.50 162.92 10.18 0.00 0.00 745788.54 85907.43 673783.78
00:08:48.490 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0xa000 length 0xa000
00:08:48.490 Nvme1n1 : 5.51 162.76 10.17 0.00 0.00 743491.44 84644.09 663677.02
00:08:48.490 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x0 length 0x8000
00:08:48.490 Nvme2n1 : 5.61 163.26 10.20 0.00 0.00 721088.40 108647.63 687259.45
00:08:48.490 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x8000 length 0x8000
00:08:48.490 Nvme2n1 : 5.64 170.12 10.63 0.00 0.00 699734.54 28214.70 646832.42
00:08:48.490 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x0 length 0x8000
00:08:48.490 Nvme2n2 : 5.73 174.66 10.92 0.00 0.00 664965.50 35163.09 700735.13
00:08:48.490 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x8000 length 0x8000
00:08:48.490 Nvme2n2 : 5.73 174.61 10.91 0.00 0.00 663874.52 33478.63 653570.26
00:08:48.490 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x0 length 0x8000
00:08:48.490 Nvme2n3 : 5.73 174.45 10.90 0.00 0.00 648954.42 37058.11 727686.48
00:08:48.490 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x8000 length 0x8000
00:08:48.490 Nvme2n3 : 5.73 174.92 10.93 0.00 0.00 645927.78 34110.30 667045.94
00:08:48.490 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:48.490 Verification LBA range: start 0x0 length 0x2000
00:08:48.491 Nvme3n1 : 5.74 189.56 11.85 0.00 0.00 589062.01 1789.74 744531.07
00:08:48.491 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:48.491 Verification LBA range: start 0x2000 length 0x2000
00:08:48.491 Nvme3n1 : 5.74 182.32 11.40 0.00 0.00 609858.99 2289.81 1441897.28
00:08:48.491 [2024-11-05T17:58:17.814Z] ===================================================================================================================
00:08:48.491 [2024-11-05T17:58:17.814Z] Total : 2055.39 128.46 0.00 0.00 684716.84 1789.74 1441897.28
00:08:50.413
00:08:50.413 real 0m8.789s
00:08:50.413 user 0m16.459s
00:08:50.413 sys 0m0.309s
00:08:50.413 17:58:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:50.413 17:58:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:50.413 ************************************
00:08:50.413 END TEST bdev_verify_big_io
00:08:50.413 ************************************
00:08:50.413 17:58:19 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:50.414 17:58:19 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:08:50.414 17:58:19 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:50.414 17:58:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:50.414 ************************************
00:08:50.414 START TEST bdev_write_zeroes
00:08:50.414 ************************************
00:08:50.414 17:58:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:50.414 [2024-11-05 17:58:19.547426] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:08:50.414 [2024-11-05 17:58:19.548067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61586 ]
00:08:50.414 [2024-11-05 17:58:19.727031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.673 [2024-11-05 17:58:19.842237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.241 Running I/O for 1 seconds...
00:08:52.618 77127.00 IOPS, 301.28 MiB/s
00:08:52.618
00:08:52.618 Latency(us)
00:08:52.618 [2024-11-05T17:58:21.941Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:52.618 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.618 Nvme0n1 : 1.02 12778.76 49.92 0.00 0.00 9990.18 4132.19 30741.38
00:08:52.618 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.618 Nvme1n1 : 1.02 12821.71 50.08 0.00 0.00 9945.32 8317.02 31794.17
00:08:52.618 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.618 Nvme2n1 : 1.02 12809.34 50.04 0.00 0.00 9918.76 8159.10 29899.16
00:08:52.618 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.618 Nvme2n2 : 1.02 12844.91 50.18 0.00 0.00 9847.50 5948.25 24529.94
00:08:52.618 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.618 Nvme2n3 : 1.02 12831.85 50.12 0.00 0.00 9821.47 6264.08 22529.64
00:08:52.618 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:52.618 Nvme3n1 : 1.03 12861.45 50.24 0.00 0.00 9772.45 3974.27 20318.79
00:08:52.618 [2024-11-05T17:58:21.941Z] ===================================================================================================================
00:08:52.618 [2024-11-05T17:58:21.941Z] Total : 76948.02 300.58 0.00 0.00 9882.28 3974.27 31794.17
00:08:53.558
00:08:53.558 real 0m3.181s
00:08:53.558 user 0m2.800s
00:08:53.558 sys 0m0.266s
00:08:53.558 17:58:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:53.558 17:58:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:08:53.558 ************************************
00:08:53.558 END TEST bdev_write_zeroes
00:08:53.558 ************************************
00:08:53.558 17:58:22 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:53.558 17:58:22 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:08:53.558 17:58:22 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:53.558 17:58:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:53.559 ************************************
00:08:53.559 START TEST bdev_json_nonenclosed
00:08:53.559 ************************************
00:08:53.559 17:58:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:53.559 [2024-11-05 17:58:22.806830] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:08:53.559 [2024-11-05 17:58:22.806958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61640 ]
00:08:53.818 [2024-11-05 17:58:22.987003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.818 [2024-11-05 17:58:23.090518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:53.818 [2024-11-05 17:58:23.090621] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:08:53.818 [2024-11-05 17:58:23.090654] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:08:53.818 [2024-11-05 17:58:23.090671] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:54.077
00:08:54.077 real 0m0.613s
00:08:54.077 user 0m0.360s
00:08:54.077 sys 0m0.148s
00:08:54.077 17:58:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable
00:08:54.077 17:58:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:08:54.077 ************************************
00:08:54.077 END TEST bdev_json_nonenclosed
00:08:54.077 ************************************
00:08:54.077 17:58:23 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:54.077 17:58:23 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:08:54.077 17:58:23 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:08:54.077 17:58:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:08:54.335 ************************************
00:08:54.335 START TEST bdev_json_nonarray
00:08:54.335 ************************************
00:08:54.335 17:58:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:54.335 [2024-11-05 17:58:23.494514] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:08:54.335 [2024-11-05 17:58:23.494660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61664 ]
00:08:54.594 [2024-11-05 17:58:23.676201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.594 [2024-11-05 17:58:23.779664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.594 [2024-11-05 17:58:23.779795] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
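These two negative tests feed bdevperf deliberately malformed configs: nonenclosed.json trips the "not enclosed in {}" check, and nonarray.json the "'subsystems' should be an array" check seen just above. A minimal shape that would pass both validations, sketched as a bash heredoc; the bdev_malloc_create method name matches the RPC used earlier in this log, but the file path and parameter values are illustrative assumptions:

    # Well-formed SPDK JSON config: a top-level object whose "subsystems"
    # key is an array of per-subsystem config blocks.
    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF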
00:08:54.594 [2024-11-05 17:58:23.779829] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:54.594 [2024-11-05 17:58:23.779847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:54.853 00:08:54.853 real 0m0.622s 00:08:54.853 user 0m0.379s 00:08:54.853 sys 0m0.139s 00:08:54.853 17:58:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.853 17:58:24 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:54.853 ************************************ 00:08:54.853 END TEST bdev_json_nonarray 00:08:54.853 ************************************ 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:54.853 17:58:24 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:54.853 00:08:54.853 real 0m42.131s 00:08:54.853 user 1m2.215s 00:08:54.853 sys 0m7.731s 00:08:54.853 17:58:24 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.853 17:58:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.853 ************************************ 00:08:54.853 END TEST blockdev_nvme 00:08:54.853 ************************************ 00:08:54.853 17:58:24 -- spdk/autotest.sh@209 -- # uname -s 00:08:54.853 17:58:24 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:54.853 17:58:24 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:54.853 17:58:24 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:54.853 17:58:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.853 17:58:24 -- common/autotest_common.sh@10 -- # set +x 00:08:54.853 ************************************ 00:08:54.853 START TEST blockdev_nvme_gpt 00:08:54.853 ************************************ 00:08:54.853 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:55.113 * Looking for test storage... 
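The scripts/common.sh trace below is just a coverage-tooling version gate: autotest asks lcov --version, keeps the last field, and runs a component-wise compare (the lt 1.15 2 call) to decide which lcov/genhtml --rc options apply. A standalone sketch of the same check, using sort -V instead of the read -ra loop the script actually runs:

  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2: use the lcov_* --rc spellings"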
00:08:55.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:55.113 17:58:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.113 --rc genhtml_branch_coverage=1 00:08:55.113 --rc genhtml_function_coverage=1 00:08:55.113 --rc genhtml_legend=1 00:08:55.113 --rc geninfo_all_blocks=1 00:08:55.113 --rc geninfo_unexecuted_blocks=1 00:08:55.113 00:08:55.113 ' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.113 --rc 
genhtml_branch_coverage=1 00:08:55.113 --rc genhtml_function_coverage=1 00:08:55.113 --rc genhtml_legend=1 00:08:55.113 --rc geninfo_all_blocks=1 00:08:55.113 --rc geninfo_unexecuted_blocks=1 00:08:55.113 00:08:55.113 ' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.113 --rc genhtml_branch_coverage=1 00:08:55.113 --rc genhtml_function_coverage=1 00:08:55.113 --rc genhtml_legend=1 00:08:55.113 --rc geninfo_all_blocks=1 00:08:55.113 --rc geninfo_unexecuted_blocks=1 00:08:55.113 00:08:55.113 ' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:55.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:55.113 --rc genhtml_branch_coverage=1 00:08:55.113 --rc genhtml_function_coverage=1 00:08:55.113 --rc genhtml_legend=1 00:08:55.113 --rc geninfo_all_blocks=1 00:08:55.113 --rc geninfo_unexecuted_blocks=1 00:08:55.113 00:08:55.113 ' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61748 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61748 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 61748 ']' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.113 17:58:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.373 [2024-11-05 17:58:24.532596] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:08:55.373 [2024-11-05 17:58:24.532718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61748 ] 00:08:55.633 [2024-11-05 17:58:24.711349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.633 [2024-11-05 17:58:24.824872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.570 17:58:25 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.570 17:58:25 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:08:56.570 17:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:56.570 17:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:56.570 17:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:56.829 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.088 Waiting for block devices as requested 00:08:57.355 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.355 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.355 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:57.623 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:02.893 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 
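The loop above only collects zoned namespaces, and finds none here: every /sys/block/nvme*/queue/zoned on these QEMU devices reads "none". The loop that starts next is the interesting one: setup_gpt_conf probes each namespace with parted until it finds one with no disk label, then builds the GPT fixture the rest of the suite depends on. Condensed to its three essential commands, with the device and GUIDs exactly as in the trace below:

  parted /dev/nvme0n1 -ms print   # "unrecognised disk label" marks a blank, usable disk
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1

The type GUIDs are not invented: they are grepped out of module/bdev/gpt/gpt.h (SPDK_GPT_PART_TYPE_GUID and its _OLD variant), and partition 2 is retagged with the old GUID by a second sgdisk call, which is why "The operation has completed successfully." prints twice.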
00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:02.893 BYT; 00:09:02.893 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:02.893 BYT; 00:09:02.893 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:02.893 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:02.893 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:09:02.894 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:09:02.894 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:02.894 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:02.894 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:02.894 17:58:31 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:02.894 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:02.894 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:03.827 The operation has completed successfully. 00:09:03.828 17:58:33 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:04.762 The operation has completed successfully. 00:09:04.762 17:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:05.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:06.267 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.268 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.268 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:06.527 17:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:06.527 17:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.527 17:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.527 [] 00:09:06.527 17:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.527 17:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:06.527 17:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:06.527 17:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:06.527 17:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:06.527 17:58:35 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:06.527 17:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.527 17:58:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.785 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.785 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:09:06.785 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:09:06.785 17:58:36 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.785 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.785 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.786 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:06.786 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.786 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.045 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.045 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:09:07.045 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:09:07.045 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:09:07.045 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.045 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.045 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.045 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:09:07.045 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "1c7ca940-d6a6-435b-a4c3-528c53267402"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1c7ca940-d6a6-435b-a4c3-528c53267402",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "61b256e6-6bd1-4b77-a068-bb2ab80e4bbd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "61b256e6-6bd1-4b77-a068-bb2ab80e4bbd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "03b81d7e-d79f-4459-865b-a63bb55a6f9f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "03b81d7e-d79f-4459-865b-a63bb55a6f9f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:09:07.046 ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "75a65d7f-5cc1-473c-b9c4-ea6f5d4956f3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "75a65d7f-5cc1-473c-b9c4-ea6f5d4956f3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ff6fb379-c040-485b-9551-63fab6ed1982"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ff6fb379-c040-485b-9551-63fab6ed1982",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:07.046 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:09:07.046 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:09:07.046 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:09:07.046 17:58:36 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61748 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 61748 ']' 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 61748 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61748 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:07.046 killing process with pid 61748 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61748' 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 61748 00:09:07.046 17:58:36 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 61748 00:09:09.583 17:58:38 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:09.583 17:58:38 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:09.583 17:58:38 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:09:09.583 17:58:38 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.583 17:58:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:09.583 ************************************ 00:09:09.583 START TEST bdev_hello_world 00:09:09.583 ************************************ 00:09:09.583 17:58:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:09.583 
[2024-11-05 17:58:38.806363] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:09.583 [2024-11-05 17:58:38.806492] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62392 ] 00:09:09.842 [2024-11-05 17:58:38.986959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.842 [2024-11-05 17:58:39.104462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.778 [2024-11-05 17:58:39.756632] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:10.778 [2024-11-05 17:58:39.756681] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:10.778 [2024-11-05 17:58:39.756707] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:10.778 [2024-11-05 17:58:39.759732] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:10.778 [2024-11-05 17:58:39.760453] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:10.778 [2024-11-05 17:58:39.760489] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:10.778 [2024-11-05 17:58:39.760718] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:10.778 00:09:10.778 [2024-11-05 17:58:39.760743] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:11.715 00:09:11.715 real 0m2.165s 00:09:11.715 user 0m1.799s 00:09:11.715 sys 0m0.258s 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:11.715 ************************************ 00:09:11.715 END TEST bdev_hello_world 00:09:11.715 ************************************ 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:11.715 17:58:40 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:09:11.715 17:58:40 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:11.715 17:58:40 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:11.715 17:58:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:11.715 ************************************ 00:09:11.715 START TEST bdev_bounds 00:09:11.715 ************************************ 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62440 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:11.715 Process bdevio pid: 62440 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62440' 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62440 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 62440 ']' 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.715 17:58:40 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:11.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:11.715 17:58:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:11.973 [2024-11-05 17:58:41.060656] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:11.973 [2024-11-05 17:58:41.060803] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62440 ] 00:09:11.973 [2024-11-05 17:58:41.248253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:12.232 [2024-11-05 17:58:41.388715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.232 [2024-11-05 17:58:41.388879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.232 [2024-11-05 17:58:41.388910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.800 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:12.800 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:09:12.800 17:58:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:13.059 I/O targets: 00:09:13.059 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:13.059 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:13.059 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:13.059 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:13.059 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:13.059 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:13.059 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:13.059 00:09:13.059 00:09:13.059 CUnit - A unit testing framework for C - Version 2.1-3 00:09:13.059 http://cunit.sourceforge.net/ 00:09:13.059 00:09:13.059 00:09:13.059 Suite: bdevio tests on: Nvme3n1 00:09:13.059 Test: blockdev write read block ...passed 00:09:13.059 Test: blockdev write zeroes read block ...passed 00:09:13.059 Test: blockdev write zeroes read no split ...passed 00:09:13.059 Test: blockdev write zeroes read split ...passed 00:09:13.059 Test: blockdev write zeroes read split partial ...passed 00:09:13.059 Test: blockdev reset ...[2024-11-05 17:58:42.304030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:13.059 [2024-11-05 17:58:42.308026] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:13.059 passed 00:09:13.059 Test: blockdev write read 8 blocks ...passed 00:09:13.059 Test: blockdev write read size > 128k ...passed 00:09:13.059 Test: blockdev write read invalid size ...passed 00:09:13.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.059 Test: blockdev write read max offset ...passed 00:09:13.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.059 Test: blockdev writev readv 8 blocks ...passed 00:09:13.059 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.059 Test: blockdev writev readv block ...passed 00:09:13.059 Test: blockdev writev readv size > 128k ...passed 00:09:13.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.059 Test: blockdev comparev and writev ...[2024-11-05 17:58:42.317804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8804000 len:0x1000 00:09:13.059 [2024-11-05 17:58:42.317963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:13.059 passed 00:09:13.059 Test: blockdev nvme passthru rw ...passed 00:09:13.059 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:58:42.319203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:13.059 [2024-11-05 17:58:42.319386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:09:13.059 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:09:13.059 passed 00:09:13.059 Test: blockdev copy ...passed 00:09:13.059 Suite: bdevio tests on: Nvme2n3 00:09:13.059 Test: blockdev write read block ...passed 00:09:13.059 Test: blockdev write zeroes read block ...passed 00:09:13.059 Test: blockdev write zeroes read no split ...passed 00:09:13.059 Test: blockdev write zeroes read split ...passed 00:09:13.319 Test: blockdev write zeroes read split partial ...passed 00:09:13.319 Test: blockdev reset ...[2024-11-05 17:58:42.398228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:13.319 [2024-11-05 17:58:42.402307] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:13.319 passed 00:09:13.319 Test: blockdev write read 8 blocks ...passed 00:09:13.319 Test: blockdev write read size > 128k ...passed 00:09:13.319 Test: blockdev write read invalid size ...passed 00:09:13.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.319 Test: blockdev write read max offset ...passed 00:09:13.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.319 Test: blockdev writev readv 8 blocks ...passed 00:09:13.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.319 Test: blockdev writev readv block ...passed 00:09:13.319 Test: blockdev writev readv size > 128k ...passed 00:09:13.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.319 Test: blockdev comparev and writev ...[2024-11-05 17:58:42.410448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8802000 len:0x1000 00:09:13.319 [2024-11-05 17:58:42.410498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:13.319 passed 00:09:13.319 Test: blockdev nvme passthru rw ...passed 00:09:13.319 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:58:42.411430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:13.319 [2024-11-05 17:58:42.411465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:13.319 passed 00:09:13.319 Test: blockdev nvme admin passthru ...passed 00:09:13.319 Test: blockdev copy ...passed 00:09:13.319 Suite: bdevio tests on: Nvme2n2 00:09:13.319 Test: blockdev write read block ...passed 00:09:13.319 Test: blockdev write zeroes read block ...passed 00:09:13.319 Test: blockdev write zeroes read no split ...passed 00:09:13.319 Test: blockdev write zeroes read split ...passed 00:09:13.319 Test: blockdev write zeroes read split partial ...passed 00:09:13.319 Test: blockdev reset ...[2024-11-05 17:58:42.490738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:13.319 [2024-11-05 17:58:42.494894] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:13.319 passed 00:09:13.319 Test: blockdev write read 8 blocks ...passed 00:09:13.319 Test: blockdev write read size > 128k ...passed 00:09:13.319 Test: blockdev write read invalid size ...passed 00:09:13.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.319 Test: blockdev write read max offset ...passed 00:09:13.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.319 Test: blockdev writev readv 8 blocks ...passed 00:09:13.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.319 Test: blockdev writev readv block ...passed 00:09:13.319 Test: blockdev writev readv size > 128k ...passed 00:09:13.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.319 Test: blockdev comparev and writev ...[2024-11-05 17:58:42.503787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb638000 len:0x1000 00:09:13.319 [2024-11-05 17:58:42.503832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:13.319 passed 00:09:13.319 Test: blockdev nvme passthru rw ...passed 00:09:13.319 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:58:42.504785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:13.319 [2024-11-05 17:58:42.504819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:13.319 passed 00:09:13.319 Test: blockdev nvme admin passthru ...passed 00:09:13.319 Test: blockdev copy ...passed 00:09:13.319 Suite: bdevio tests on: Nvme2n1 00:09:13.319 Test: blockdev write read block ...passed 00:09:13.319 Test: blockdev write zeroes read block ...passed 00:09:13.319 Test: blockdev write zeroes read no split ...passed 00:09:13.319 Test: blockdev write zeroes read split ...passed 00:09:13.319 Test: blockdev write zeroes read split partial ...passed 00:09:13.319 Test: blockdev reset ...[2024-11-05 17:58:42.579271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:13.319 [2024-11-05 17:58:42.583148] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:13.319 passed 00:09:13.319 Test: blockdev write read 8 blocks ...passed 00:09:13.319 Test: blockdev write read size > 128k ...passed 00:09:13.319 Test: blockdev write read invalid size ...passed 00:09:13.319 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.319 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.319 Test: blockdev write read max offset ...passed 00:09:13.319 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.319 Test: blockdev writev readv 8 blocks ...passed 00:09:13.319 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.319 Test: blockdev writev readv block ...passed 00:09:13.319 Test: blockdev writev readv size > 128k ...passed 00:09:13.319 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.319 Test: blockdev comparev and writev ...[2024-11-05 17:58:42.591495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cb634000 len:0x1000 00:09:13.319 [2024-11-05 17:58:42.591541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:13.319 passed 00:09:13.319 Test: blockdev nvme passthru rw ...passed 00:09:13.320 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:58:42.592423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:13.320 [2024-11-05 17:58:42.592456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:13.320 passed 00:09:13.320 Test: blockdev nvme admin passthru ...passed 00:09:13.320 Test: blockdev copy ...passed 00:09:13.320 Suite: bdevio tests on: Nvme1n1p2 00:09:13.320 Test: blockdev write read block ...passed 00:09:13.320 Test: blockdev write zeroes read block ...passed 00:09:13.320 Test: blockdev write zeroes read no split ...passed 00:09:13.320 Test: blockdev write zeroes read split ...passed 00:09:13.579 Test: blockdev write zeroes read split partial ...passed 00:09:13.579 Test: blockdev reset ...[2024-11-05 17:58:42.671797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:13.579 [2024-11-05 17:58:42.675341] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:13.579 passed 00:09:13.579 Test: blockdev write read 8 blocks ...passed 00:09:13.579 Test: blockdev write read size > 128k ...passed 00:09:13.579 Test: blockdev write read invalid size ...passed 00:09:13.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.579 Test: blockdev write read max offset ...passed 00:09:13.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.579 Test: blockdev writev readv 8 blocks ...passed 00:09:13.579 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.579 Test: blockdev writev readv block ...passed 00:09:13.579 Test: blockdev writev readv size > 128k ...passed 00:09:13.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.579 Test: blockdev comparev and writev ...[2024-11-05 17:58:42.684629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cb630000 len:0x1000 00:09:13.579 [2024-11-05 17:58:42.684675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:13.579 passed 00:09:13.579 Test: blockdev nvme passthru rw ...passed 00:09:13.579 Test: blockdev nvme passthru vendor specific ...passed 00:09:13.579 Test: blockdev nvme admin passthru ...passed 00:09:13.579 Test: blockdev copy ...passed 00:09:13.579 Suite: bdevio tests on: Nvme1n1p1 00:09:13.579 Test: blockdev write read block ...passed 00:09:13.579 Test: blockdev write zeroes read block ...passed 00:09:13.579 Test: blockdev write zeroes read no split ...passed 00:09:13.579 Test: blockdev write zeroes read split ...passed 00:09:13.579 Test: blockdev write zeroes read split partial ...passed 00:09:13.579 Test: blockdev reset ...[2024-11-05 17:58:42.784589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:13.579 [2024-11-05 17:58:42.788598] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:13.579 passed 00:09:13.579 Test: blockdev write read 8 blocks ...passed 00:09:13.579 Test: blockdev write read size > 128k ...passed 00:09:13.579 Test: blockdev write read invalid size ...passed 00:09:13.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.579 Test: blockdev write read max offset ...passed 00:09:13.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.579 Test: blockdev writev readv 8 blocks ...passed 00:09:13.579 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.579 Test: blockdev writev readv block ...passed 00:09:13.579 Test: blockdev writev readv size > 128k ...passed 00:09:13.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.579 Test: blockdev comparev and writev ...[2024-11-05 17:58:42.797707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b920e000 len:0x1000 00:09:13.579 [2024-11-05 17:58:42.797753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:13.579 passed 00:09:13.579 Test: blockdev nvme passthru rw ...passed 00:09:13.579 Test: blockdev nvme passthru vendor specific ...passed 00:09:13.579 Test: blockdev nvme admin passthru ...passed 00:09:13.579 Test: blockdev copy ...passed 00:09:13.579 Suite: bdevio tests on: Nvme0n1 00:09:13.579 Test: blockdev write read block ...passed 00:09:13.579 Test: blockdev write zeroes read block ...passed 00:09:13.579 Test: blockdev write zeroes read no split ...passed 00:09:13.579 Test: blockdev write zeroes read split ...passed 00:09:13.579 Test: blockdev write zeroes read split partial ...passed 00:09:13.579 Test: blockdev reset ...[2024-11-05 17:58:42.867340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:13.579 [2024-11-05 17:58:42.871145] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:13.579 passed 00:09:13.579 Test: blockdev write read 8 blocks ...passed 00:09:13.579 Test: blockdev write read size > 128k ...passed 00:09:13.579 Test: blockdev write read invalid size ...passed 00:09:13.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:13.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:13.579 Test: blockdev write read max offset ...passed 00:09:13.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:13.579 Test: blockdev writev readv 8 blocks ...passed 00:09:13.579 Test: blockdev writev readv 30 x 1block ...passed 00:09:13.579 Test: blockdev writev readv block ...passed 00:09:13.579 Test: blockdev writev readv size > 128k ...passed 00:09:13.579 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:13.579 Test: blockdev comparev and writev ...[2024-11-05 17:58:42.879856] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:13.579 separate metadata which is not supported yet. 
00:09:13.579 passed 00:09:13.579 Test: blockdev nvme passthru rw ...passed 00:09:13.579 Test: blockdev nvme passthru vendor specific ...[2024-11-05 17:58:42.880618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:13.579 [2024-11-05 17:58:42.880668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:13.579 passed 00:09:13.579 Test: blockdev nvme admin passthru ...passed 00:09:13.579 Test: blockdev copy ...passed 00:09:13.579 00:09:13.579 Run Summary: Type Total Ran Passed Failed Inactive 00:09:13.579 suites 7 7 n/a 0 0 00:09:13.579 tests 161 161 161 0 0 00:09:13.579 asserts 1025 1025 1025 0 n/a 00:09:13.579 00:09:13.579 Elapsed time = 1.779 seconds 00:09:13.579 0 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62440 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 62440 ']' 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 62440 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62440 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:13.839 killing process with pid 62440 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62440' 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 62440 00:09:13.839 17:58:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 62440 00:09:14.778 17:58:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:14.778 00:09:14.778 real 0m3.043s 00:09:14.778 user 0m7.928s 00:09:14.778 sys 0m0.401s 00:09:14.778 17:58:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.778 17:58:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:14.778 ************************************ 00:09:14.778 END TEST bdev_bounds 00:09:14.778 ************************************ 00:09:14.778 17:58:44 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:14.778 17:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:14.778 17:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.778 17:58:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:14.778 ************************************ 00:09:14.778 START TEST bdev_nbd 00:09:14.778 ************************************ 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:14.778 17:58:44 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62500 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62500 /var/tmp/spdk-nbd.sock 00:09:14.778 17:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 62500 ']' 00:09:14.779 17:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.779 17:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:14.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:14.779 17:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:14.779 17:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:14.779 17:58:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:15.053 [2024-11-05 17:58:44.177151] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:09:15.053 [2024-11-05 17:58:44.177272] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.053 [2024-11-05 17:58:44.360974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.313 [2024-11-05 17:58:44.480731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:16.251 1+0 records in 00:09:16.251 1+0 records out 00:09:16.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420634 s, 9.7 MB/s 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:16.251 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:16.511 1+0 records in 00:09:16.511 1+0 records out 00:09:16.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641628 s, 6.4 MB/s 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:16.511 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:16.771 1+0 records in 00:09:16.771 1+0 records out 00:09:16.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703735 s, 5.8 MB/s 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:16.771 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.030 1+0 records in 00:09:17.030 1+0 records out 00:09:17.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662176 s, 6.2 MB/s 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.030 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.289 1+0 records in 00:09:17.289 1+0 records out 00:09:17.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678933 s, 6.0 MB/s 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.289 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.548 1+0 records in 00:09:17.548 1+0 records out 00:09:17.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885634 s, 4.6 MB/s 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.548 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:17.808 1+0 records in 00:09:17.808 1+0 records out 00:09:17.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845387 s, 4.8 MB/s 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:17.808 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:18.069 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd0", 00:09:18.069 "bdev_name": "Nvme0n1" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd1", 00:09:18.069 "bdev_name": "Nvme1n1p1" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd2", 00:09:18.069 "bdev_name": "Nvme1n1p2" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd3", 00:09:18.069 "bdev_name": "Nvme2n1" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd4", 00:09:18.069 "bdev_name": "Nvme2n2" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd5", 00:09:18.069 "bdev_name": "Nvme2n3" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd6", 00:09:18.069 "bdev_name": "Nvme3n1" 00:09:18.069 } 00:09:18.069 ]' 00:09:18.069 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:18.069 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:18.069 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd0", 00:09:18.069 "bdev_name": "Nvme0n1" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd1", 00:09:18.069 "bdev_name": "Nvme1n1p1" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd2", 00:09:18.069 "bdev_name": "Nvme1n1p2" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd3", 00:09:18.069 "bdev_name": "Nvme2n1" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd4", 00:09:18.069 "bdev_name": "Nvme2n2" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd5", 00:09:18.069 "bdev_name": "Nvme2n3" 00:09:18.069 }, 00:09:18.069 { 00:09:18.069 "nbd_device": "/dev/nbd6", 00:09:18.069 "bdev_name": "Nvme3n1" 00:09:18.069 } 00:09:18.070 ]' 00:09:18.070 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:18.070 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.070 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:18.070 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:18.070 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:18.070 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.070 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.329 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.588 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.848 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.848 17:58:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:19.107 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:19.107 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:19.107 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:19.107 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.108 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.108 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:19.108 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.108 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.108 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.108 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:19.366 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:19.625 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:19.883 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:19.883 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:19.883 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:20.140 
17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:20.140 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:20.141 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:20.141 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:20.141 /dev/nbd0 00:09:20.399 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:20.399 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:20.399 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:09:20.399 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:20.399 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:20.399 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:20.399 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.400 1+0 records in 00:09:20.400 1+0 records out 00:09:20.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524153 s, 7.8 MB/s 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:20.400 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:20.400 /dev/nbd1 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:20.659 17:58:49 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.659 1+0 records in 00:09:20.659 1+0 records out 00:09:20.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623313 s, 6.6 MB/s 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:20.659 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:20.659 /dev/nbd10 00:09:20.918 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:20.918 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:20.918 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:20.918 1+0 records in 00:09:20.918 1+0 records out 00:09:20.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565573 s, 7.2 MB/s 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:20.918 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:20.918 /dev/nbd11 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.177 1+0 records in 00:09:21.177 1+0 records out 00:09:21.177 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850629 s, 4.8 MB/s 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:21.177 /dev/nbd12 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:21.177 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
00:09:21.436 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:21.436 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:21.436 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.437 1+0 records in 00:09:21.437 1+0 records out 00:09:21.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770888 s, 5.3 MB/s 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:21.437 /dev/nbd13 00:09:21.437 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.696 1+0 records in 00:09:21.696 1+0 records out 00:09:21.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000905883 s, 4.5 MB/s 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:21.696 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:21.696 /dev/nbd14 00:09:21.696 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:21.955 1+0 records in 00:09:21.955 1+0 records out 00:09:21.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798301 s, 5.1 MB/s 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd0", 00:09:21.955 "bdev_name": "Nvme0n1" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd1", 00:09:21.955 "bdev_name": "Nvme1n1p1" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd10", 00:09:21.955 "bdev_name": "Nvme1n1p2" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd11", 00:09:21.955 "bdev_name": "Nvme2n1" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd12", 00:09:21.955 "bdev_name": "Nvme2n2" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd13", 00:09:21.955 "bdev_name": "Nvme2n3" 
00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd14", 00:09:21.955 "bdev_name": "Nvme3n1" 00:09:21.955 } 00:09:21.955 ]' 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd0", 00:09:21.955 "bdev_name": "Nvme0n1" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd1", 00:09:21.955 "bdev_name": "Nvme1n1p1" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd10", 00:09:21.955 "bdev_name": "Nvme1n1p2" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd11", 00:09:21.955 "bdev_name": "Nvme2n1" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd12", 00:09:21.955 "bdev_name": "Nvme2n2" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd13", 00:09:21.955 "bdev_name": "Nvme2n3" 00:09:21.955 }, 00:09:21.955 { 00:09:21.955 "nbd_device": "/dev/nbd14", 00:09:21.955 "bdev_name": "Nvme3n1" 00:09:21.955 } 00:09:21.955 ]' 00:09:21.955 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:22.215 /dev/nbd1 00:09:22.215 /dev/nbd10 00:09:22.215 /dev/nbd11 00:09:22.215 /dev/nbd12 00:09:22.215 /dev/nbd13 00:09:22.215 /dev/nbd14' 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:22.215 /dev/nbd1 00:09:22.215 /dev/nbd10 00:09:22.215 /dev/nbd11 00:09:22.215 /dev/nbd12 00:09:22.215 /dev/nbd13 00:09:22.215 /dev/nbd14' 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:22.215 256+0 records in 00:09:22.215 256+0 records out 00:09:22.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123561 s, 84.9 MB/s 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:22.215 256+0 records in 00:09:22.215 256+0 records out 00:09:22.215 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.135285 s, 7.8 MB/s 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.215 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:22.474 256+0 records in 00:09:22.474 256+0 records out 00:09:22.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.142455 s, 7.4 MB/s 00:09:22.474 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.474 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:22.474 256+0 records in 00:09:22.474 256+0 records out 00:09:22.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143027 s, 7.3 MB/s 00:09:22.474 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.474 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:22.733 256+0 records in 00:09:22.733 256+0 records out 00:09:22.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144101 s, 7.3 MB/s 00:09:22.733 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.733 17:58:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:22.993 256+0 records in 00:09:22.993 256+0 records out 00:09:22.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141783 s, 7.4 MB/s 00:09:22.993 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.993 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:22.993 256+0 records in 00:09:22.993 256+0 records out 00:09:22.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141475 s, 7.4 MB/s 00:09:22.993 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.993 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:23.252 256+0 records in 00:09:23.252 256+0 records out 00:09:23.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138801 s, 7.6 MB/s 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:23.252 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.253 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.512 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.771 17:58:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.030 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:24.289 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:24.289 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:24.289 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:24.289 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.289 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.290 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:24.290 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.290 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.290 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.290 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:24.549 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:24.808 17:58:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:25.067 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:25.068 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.068 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:25.068 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:25.326 malloc_lvol_verify 00:09:25.326 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:25.326 1c05ffc3-fe38-4482-a7a1-4ad62b2aafe0 00:09:25.585 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:25.585 a2bff580-bf38-42c2-9332-8b117facb811 00:09:25.585 17:58:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:25.845 /dev/nbd0 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:25.845 mke2fs 1.47.0 (5-Feb-2023) 00:09:25.845 Discarding device blocks: 0/4096 done 00:09:25.845 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:25.845 00:09:25.845 Allocating group tables: 0/1 done 00:09:25.845 Writing inode tables: 0/1 done 00:09:25.845 Creating journal (1024 blocks): done 00:09:25.845 Writing superblocks and filesystem accounting information: 0/1 done 00:09:25.845 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:25.845 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62500 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 62500 ']' 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 62500 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62500 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:26.104 killing process with pid 62500 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62500' 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 62500 00:09:26.104 17:58:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 62500 00:09:27.481 17:58:56 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:27.481 00:09:27.481 real 0m12.456s 00:09:27.481 user 0m15.988s 00:09:27.481 sys 0m5.337s 00:09:27.481 17:58:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:27.481 17:58:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:27.481 ************************************ 00:09:27.481 END TEST bdev_nbd 00:09:27.481 ************************************ 00:09:27.481 17:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:09:27.481 17:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:09:27.481 17:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:09:27.481 skipping fio tests on NVMe due to multi-ns failures. 00:09:27.481 17:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:27.481 17:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:27.481 17:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:27.481 17:58:56 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:09:27.481 17:58:56 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:27.481 17:58:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.481 ************************************ 00:09:27.481 START TEST bdev_verify 00:09:27.481 ************************************ 00:09:27.481 17:58:56 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:27.481 [2024-11-05 17:58:56.693036] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:27.481 [2024-11-05 17:58:56.693165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62934 ] 00:09:27.740 [2024-11-05 17:58:56.874694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.740 [2024-11-05 17:58:56.990435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.740 [2024-11-05 17:58:56.990491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.674 Running I/O for 5 seconds... 
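The verify pass now running is plain bdevperf driven by the JSON config generated earlier in the suite. Extracted from the run_test line above for standalone re-runs (paths copied verbatim from the log, so this assumes the same vagrant checkout layout; run_test also passes a trailing empty env-context argument, omitted here):

# Queue depth 128, 4 KiB IOs, verify workload for 5 s on core mask 0x3;
# flags exactly as logged.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3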
00:09:31.009 20544.00 IOPS, 80.25 MiB/s [2024-11-05T17:59:01.267Z] 21632.00 IOPS, 84.50 MiB/s [2024-11-05T17:59:01.834Z] 21568.00 IOPS, 84.25 MiB/s [2024-11-05T17:59:03.213Z] 21024.00 IOPS, 82.12 MiB/s [2024-11-05T17:59:03.213Z] 21593.60 IOPS, 84.35 MiB/s 00:09:33.890 Latency(us) 00:09:33.890 [2024-11-05T17:59:03.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.890 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x0 length 0xbd0bd 00:09:33.890 Nvme0n1 : 5.07 1528.82 5.97 0.00 0.00 83335.75 12001.77 91381.92 00:09:33.890 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:33.890 Nvme0n1 : 5.04 1497.27 5.85 0.00 0.00 85178.10 20108.23 94329.73 00:09:33.890 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x0 length 0x4ff80 00:09:33.890 Nvme1n1p1 : 5.07 1528.19 5.97 0.00 0.00 83205.50 12633.45 84222.97 00:09:33.890 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:33.890 Nvme1n1p1 : 5.05 1496.84 5.85 0.00 0.00 85043.43 21476.86 87591.89 00:09:33.890 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x0 length 0x4ff7f 00:09:33.890 Nvme1n1p2 : 5.08 1535.90 6.00 0.00 0.00 82772.88 12422.89 71589.53 00:09:33.890 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:33.890 Nvme1n1p2 : 5.09 1509.29 5.90 0.00 0.00 84237.13 12686.09 74537.33 00:09:33.890 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x0 length 0x80000 00:09:33.890 Nvme2n1 : 5.08 1535.50 6.00 0.00 0.00 82658.47 11949.13 65272.80 00:09:33.890 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x80000 length 0x80000 00:09:33.890 Nvme2n1 : 5.09 1508.77 5.89 0.00 0.00 84066.66 12949.28 71589.53 00:09:33.890 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x0 length 0x80000 00:09:33.890 Nvme2n2 : 5.09 1535.08 6.00 0.00 0.00 82529.13 12317.61 65272.80 00:09:33.890 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x80000 length 0x80000 00:09:33.890 Nvme2n2 : 5.09 1508.45 5.89 0.00 0.00 83919.66 12317.61 74116.22 00:09:33.890 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x0 length 0x80000 00:09:33.890 Nvme2n3 : 5.09 1534.70 5.99 0.00 0.00 82391.42 12107.05 63167.23 00:09:33.890 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x80000 length 0x80000 00:09:33.890 Nvme2n3 : 5.09 1508.13 5.89 0.00 0.00 83777.45 12212.33 74537.33 00:09:33.890 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x0 length 0x20000 00:09:33.890 Nvme3n1 : 5.09 1534.35 5.99 0.00 0.00 82255.05 12317.61 66115.03 00:09:33.890 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:33.890 Verification LBA range: start 0x20000 length 0x20000 00:09:33.890 
Nvme3n1 : 5.09 1507.79 5.89 0.00 0.00 83647.19 12107.05 75800.67 00:09:33.890 [2024-11-05T17:59:03.214Z] =================================================================================================================== 00:09:33.891 [2024-11-05T17:59:03.214Z] Total : 21269.07 83.08 0.00 0.00 83491.38 11949.13 94329.73 00:09:35.269 00:09:35.269 real 0m7.605s 00:09:35.269 user 0m14.085s 00:09:35.269 sys 0m0.289s 00:09:35.269 17:59:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.269 17:59:04 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:35.269 ************************************ 00:09:35.269 END TEST bdev_verify 00:09:35.269 ************************************ 00:09:35.269 17:59:04 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:35.269 17:59:04 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:09:35.269 17:59:04 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.269 17:59:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:35.269 ************************************ 00:09:35.269 START TEST bdev_verify_big_io 00:09:35.269 ************************************ 00:09:35.269 17:59:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:35.269 [2024-11-05 17:59:04.353965] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:35.269 [2024-11-05 17:59:04.354096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63032 ] 00:09:35.269 [2024-11-05 17:59:04.515726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:35.527 [2024-11-05 17:59:04.635815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.527 [2024-11-05 17:59:04.635845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.464 Running I/O for 5 seconds... 
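A quick cross-check on the verify table above: the MiB/s column is just IOPS times the 4 KiB IO size. For the final 21593.60 IOPS sample:

# IOPS x IO size -> MiB/s; matches "21593.60 IOPS, 84.35 MiB/s" above.
awk 'BEGIN { printf "%.2f MiB/s\n", 21593.60 * 4096 / (1024 * 1024) }'

The large-block pass now running differs from the verify pass only in IO size (-o 65536 instead of -o 4096), so its IOPS land far lower while each IO carries 16x the payload.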
00:09:40.903 2254.00 IOPS, 140.88 MiB/s [2024-11-05T17:59:11.164Z] 2842.50 IOPS, 177.66 MiB/s [2024-11-05T17:59:11.423Z] 2939.00 IOPS, 183.69 MiB/s 00:09:42.100 Latency(us) 00:09:42.100 [2024-11-05T17:59:11.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.100 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x0 length 0xbd0b 00:09:42.100 Nvme0n1 : 5.67 137.16 8.57 0.00 0.00 892952.88 26109.12 1293664.85 00:09:42.100 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:42.100 Nvme0n1 : 5.62 148.10 9.26 0.00 0.00 831997.88 21266.30 862443.23 00:09:42.100 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x0 length 0x4ff8 00:09:42.100 Nvme1n1p1 : 5.60 139.94 8.75 0.00 0.00 865772.62 40637.58 1313878.36 00:09:42.100 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:42.100 Nvme1n1p1 : 5.62 153.73 9.61 0.00 0.00 796450.54 42322.04 811909.45 00:09:42.100 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x0 length 0x4ff7 00:09:42.100 Nvme1n1p2 : 5.67 143.09 8.94 0.00 0.00 830846.13 56008.28 1347567.55 00:09:42.100 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:42.100 Nvme1n1p2 : 5.68 144.31 9.02 0.00 0.00 823602.58 43164.27 1239762.15 00:09:42.100 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x0 length 0x8000 00:09:42.100 Nvme2n1 : 5.73 143.51 8.97 0.00 0.00 804444.65 70747.30 1361043.23 00:09:42.100 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x8000 length 0x8000 00:09:42.100 Nvme2n1 : 5.68 156.39 9.77 0.00 0.00 749445.30 45269.85 815278.37 00:09:42.100 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x0 length 0x8000 00:09:42.100 Nvme2n2 : 5.75 152.12 9.51 0.00 0.00 748269.37 21476.86 1381256.74 00:09:42.100 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x8000 length 0x8000 00:09:42.100 Nvme2n2 : 5.73 156.84 9.80 0.00 0.00 725554.88 58956.08 832122.96 00:09:42.100 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x0 length 0x8000 00:09:42.100 Nvme2n3 : 5.78 157.82 9.86 0.00 0.00 704558.75 24319.38 1408208.09 00:09:42.100 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x8000 length 0x8000 00:09:42.100 Nvme2n3 : 5.75 166.59 10.41 0.00 0.00 673852.79 22108.53 848967.56 00:09:42.100 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x0 length 0x2000 00:09:42.100 Nvme3n1 : 5.83 191.60 11.97 0.00 0.00 569311.30 855.39 855705.39 00:09:42.100 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:42.100 Verification LBA range: start 0x2000 length 0x2000 00:09:42.100 Nvme3n1 : 5.78 181.00 11.31 0.00 0.00 607792.89 3500.52 784958.10 00:09:42.100 
[2024-11-05T17:59:11.423Z] =================================================================================================================== 00:09:42.100 [2024-11-05T17:59:11.424Z] Total : 2172.19 135.76 0.00 0.00 749347.69 855.39 1408208.09 00:09:44.008 00:09:44.008 real 0m8.943s 00:09:44.008 user 0m16.801s 00:09:44.008 sys 0m0.287s 00:09:44.008 17:59:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:44.008 17:59:13 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:44.008 ************************************ 00:09:44.008 END TEST bdev_verify_big_io 00:09:44.008 ************************************ 00:09:44.008 17:59:13 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:44.008 17:59:13 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:44.008 17:59:13 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:44.008 17:59:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:44.008 ************************************ 00:09:44.008 START TEST bdev_write_zeroes 00:09:44.008 ************************************ 00:09:44.008 17:59:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:44.267 [2024-11-05 17:59:13.382802] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:44.267 [2024-11-05 17:59:13.382919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63152 ] 00:09:44.267 [2024-11-05 17:59:13.564459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.526 [2024-11-05 17:59:13.667775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.094 Running I/O for 1 seconds... 
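The write_zeroes pass now running exercises the zero-out path rather than data verification; per the run_test line above it is the same harness with a one-second workload on a single core (no -C or -m this time):

# write_zeroes variant, flags verbatim from the log.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1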
00:09:46.473 69440.00 IOPS, 271.25 MiB/s 00:09:46.473 Latency(us) 00:09:46.473 [2024-11-05T17:59:15.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.473 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.473 Nvme0n1 : 1.02 9900.58 38.67 0.00 0.00 12894.62 10422.59 32004.73 00:09:46.473 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.473 Nvme1n1p1 : 1.02 9889.70 38.63 0.00 0.00 12894.03 10633.15 31794.17 00:09:46.473 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.473 Nvme1n1p2 : 1.02 9879.33 38.59 0.00 0.00 12859.88 10317.31 30109.71 00:09:46.473 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.473 Nvme2n1 : 1.02 9870.07 38.55 0.00 0.00 12824.04 10369.95 28846.37 00:09:46.473 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.473 Nvme2n2 : 1.03 9860.56 38.52 0.00 0.00 12795.05 10527.87 27793.58 00:09:46.473 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.473 Nvme2n3 : 1.03 9908.12 38.70 0.00 0.00 12732.81 6790.48 25688.01 00:09:46.473 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:46.473 Nvme3n1 : 1.03 9899.39 38.67 0.00 0.00 12701.11 7001.03 23687.71 00:09:46.473 [2024-11-05T17:59:15.796Z] =================================================================================================================== 00:09:46.473 [2024-11-05T17:59:15.796Z] Total : 69207.76 270.34 0.00 0.00 12814.33 6790.48 32004.73 00:09:47.214 00:09:47.214 real 0m3.199s 00:09:47.214 user 0m2.822s 00:09:47.214 sys 0m0.262s 00:09:47.214 17:59:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.214 17:59:16 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:47.214 ************************************ 00:09:47.214 END TEST bdev_write_zeroes 00:09:47.214 ************************************ 00:09:47.472 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:47.472 17:59:16 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:47.472 17:59:16 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.472 17:59:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:47.472 ************************************ 00:09:47.472 START TEST bdev_json_nonenclosed 00:09:47.472 ************************************ 00:09:47.472 17:59:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:47.472 [2024-11-05 17:59:16.653762] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:09:47.472 [2024-11-05 17:59:16.653888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63205 ] 00:09:47.731 [2024-11-05 17:59:16.831971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.731 [2024-11-05 17:59:16.952682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.731 [2024-11-05 17:59:16.952806] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:47.731 [2024-11-05 17:59:16.952829] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:47.731 [2024-11-05 17:59:16.952841] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:47.989 00:09:47.990 real 0m0.632s 00:09:47.990 user 0m0.393s 00:09:47.990 sys 0m0.134s 00:09:47.990 17:59:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.990 17:59:17 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:47.990 ************************************ 00:09:47.990 END TEST bdev_json_nonenclosed 00:09:47.990 ************************************ 00:09:47.990 17:59:17 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:47.990 17:59:17 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:09:47.990 17:59:17 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.990 17:59:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:47.990 ************************************ 00:09:47.990 START TEST bdev_json_nonarray 00:09:47.990 ************************************ 00:09:47.990 17:59:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:48.248 [2024-11-05 17:59:17.363718] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:09:48.248 [2024-11-05 17:59:17.363834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63232 ] 00:09:48.248 [2024-11-05 17:59:17.545438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.507 [2024-11-05 17:59:17.658772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.507 [2024-11-05 17:59:17.658893] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
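Both JSON tests here are negative tests: bdevperf is handed a deliberately malformed config and must fail cleanly with the json_config errors shown (first a document not enclosed in {}, then one whose "subsystems" key is not an array) instead of crashing. A hedged reproduction of the first case, with the config body guessed from the error text; the real fixtures are test/bdev/nonenclosed.json and test/bdev/nonarray.json:

# Stand-in for nonenclosed.json: a JSON fragment with no outer object,
# which json_config_prepare_ctx rejects. The body is an assumption.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "unexpected success" >&2
    exit 1
fi
echo "rejected as expected"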
00:09:48.507 [2024-11-05 17:59:17.658915] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:48.507 [2024-11-05 17:59:17.658928] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:48.766 00:09:48.766 real 0m0.637s 00:09:48.766 user 0m0.386s 00:09:48.766 sys 0m0.146s 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:48.766 ************************************ 00:09:48.766 END TEST bdev_json_nonarray 00:09:48.766 ************************************ 00:09:48.766 17:59:17 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:09:48.766 17:59:17 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:09:48.766 17:59:17 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:48.766 17:59:17 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:48.766 17:59:17 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.766 17:59:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:48.766 ************************************ 00:09:48.766 START TEST bdev_gpt_uuid 00:09:48.766 ************************************ 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63257 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63257 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63257 ']' 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.766 17:59:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:49.024 [2024-11-05 17:59:18.094571] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:09:49.024 [2024-11-05 17:59:18.094699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63257 ] 00:09:49.024 [2024-11-05 17:59:18.276759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.284 [2024-11-05 17:59:18.389544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.221 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:50.221 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:09:50.221 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:50.221 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.221 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:50.480 Some configs were skipped because the RPC state that can call them passed over. 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:09:50.480 { 00:09:50.480 "name": "Nvme1n1p1", 00:09:50.480 "aliases": [ 00:09:50.480 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:50.480 ], 00:09:50.480 "product_name": "GPT Disk", 00:09:50.480 "block_size": 4096, 00:09:50.480 "num_blocks": 655104, 00:09:50.480 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:50.480 "assigned_rate_limits": { 00:09:50.480 "rw_ios_per_sec": 0, 00:09:50.480 "rw_mbytes_per_sec": 0, 00:09:50.480 "r_mbytes_per_sec": 0, 00:09:50.480 "w_mbytes_per_sec": 0 00:09:50.480 }, 00:09:50.480 "claimed": false, 00:09:50.480 "zoned": false, 00:09:50.480 "supported_io_types": { 00:09:50.480 "read": true, 00:09:50.480 "write": true, 00:09:50.480 "unmap": true, 00:09:50.480 "flush": true, 00:09:50.480 "reset": true, 00:09:50.480 "nvme_admin": false, 00:09:50.480 "nvme_io": false, 00:09:50.480 "nvme_io_md": false, 00:09:50.480 "write_zeroes": true, 00:09:50.480 "zcopy": false, 00:09:50.480 "get_zone_info": false, 00:09:50.480 "zone_management": false, 00:09:50.480 "zone_append": false, 00:09:50.480 "compare": true, 00:09:50.480 "compare_and_write": false, 00:09:50.480 "abort": true, 00:09:50.480 "seek_hole": false, 00:09:50.480 "seek_data": false, 00:09:50.480 "copy": true, 00:09:50.480 "nvme_iov_md": false 00:09:50.480 }, 00:09:50.480 "driver_specific": { 
00:09:50.480 "gpt": { 00:09:50.480 "base_bdev": "Nvme1n1", 00:09:50.480 "offset_blocks": 256, 00:09:50.480 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:50.480 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:50.480 "partition_name": "SPDK_TEST_first" 00:09:50.480 } 00:09:50.480 } 00:09:50.480 } 00:09:50.480 ]' 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.480 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:09:50.480 { 00:09:50.480 "name": "Nvme1n1p2", 00:09:50.480 "aliases": [ 00:09:50.480 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:50.480 ], 00:09:50.480 "product_name": "GPT Disk", 00:09:50.480 "block_size": 4096, 00:09:50.480 "num_blocks": 655103, 00:09:50.480 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:50.480 "assigned_rate_limits": { 00:09:50.480 "rw_ios_per_sec": 0, 00:09:50.480 "rw_mbytes_per_sec": 0, 00:09:50.481 "r_mbytes_per_sec": 0, 00:09:50.481 "w_mbytes_per_sec": 0 00:09:50.481 }, 00:09:50.481 "claimed": false, 00:09:50.481 "zoned": false, 00:09:50.481 "supported_io_types": { 00:09:50.481 "read": true, 00:09:50.481 "write": true, 00:09:50.481 "unmap": true, 00:09:50.481 "flush": true, 00:09:50.481 "reset": true, 00:09:50.481 "nvme_admin": false, 00:09:50.481 "nvme_io": false, 00:09:50.481 "nvme_io_md": false, 00:09:50.481 "write_zeroes": true, 00:09:50.481 "zcopy": false, 00:09:50.481 "get_zone_info": false, 00:09:50.481 "zone_management": false, 00:09:50.481 "zone_append": false, 00:09:50.481 "compare": true, 00:09:50.481 "compare_and_write": false, 00:09:50.481 "abort": true, 00:09:50.481 "seek_hole": false, 00:09:50.481 "seek_data": false, 00:09:50.481 "copy": true, 00:09:50.481 "nvme_iov_md": false 00:09:50.481 }, 00:09:50.481 "driver_specific": { 00:09:50.481 "gpt": { 00:09:50.481 "base_bdev": "Nvme1n1", 00:09:50.481 "offset_blocks": 655360, 00:09:50.481 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:50.481 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:50.481 "partition_name": "SPDK_TEST_second" 00:09:50.481 } 00:09:50.481 } 00:09:50.481 } 00:09:50.481 ]' 00:09:50.481 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63257 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63257 ']' 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63257 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63257 00:09:50.739 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:50.740 killing process with pid 63257 00:09:50.740 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:50.740 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63257' 00:09:50.740 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63257 00:09:50.740 17:59:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63257 00:09:53.271 00:09:53.271 real 0m4.254s 00:09:53.271 user 0m4.359s 00:09:53.271 sys 0m0.517s 00:09:53.271 17:59:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:53.271 17:59:22 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:53.271 ************************************ 00:09:53.271 END TEST bdev_gpt_uuid 00:09:53.271 ************************************ 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:53.271 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:53.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.788 Waiting for block devices as requested 00:09:54.047 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.047 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:54.305 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.305 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.623 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:59.623 17:59:28 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:59.623 17:59:28 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:59.623 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:59.623 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:59.623 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:59.623 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:59.623 17:59:28 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:59.623 00:09:59.623 real 1m4.691s 00:09:59.623 user 1m20.565s 00:09:59.623 sys 0m11.962s 00:09:59.623 17:59:28 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.623 17:59:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:59.623 ************************************ 00:09:59.623 END TEST blockdev_nvme_gpt 00:09:59.623 ************************************ 00:09:59.623 17:59:28 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:59.623 17:59:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.623 17:59:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.623 17:59:28 -- common/autotest_common.sh@10 -- # set +x 00:09:59.623 ************************************ 00:09:59.623 START TEST nvme 00:09:59.623 ************************************ 00:09:59.623 17:59:28 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:59.882 * Looking for test storage... 00:09:59.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.882 17:59:29 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.882 17:59:29 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.882 17:59:29 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.882 17:59:29 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.882 17:59:29 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.882 17:59:29 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.882 17:59:29 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.882 17:59:29 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:59.882 17:59:29 nvme -- scripts/common.sh@345 -- # : 1 00:09:59.882 17:59:29 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.882 17:59:29 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.882 17:59:29 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:59.882 17:59:29 nvme -- scripts/common.sh@353 -- # local d=1 00:09:59.882 17:59:29 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.882 17:59:29 nvme -- scripts/common.sh@355 -- # echo 1 00:09:59.882 17:59:29 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.882 17:59:29 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@353 -- # local d=2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.882 17:59:29 nvme -- scripts/common.sh@355 -- # echo 2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.882 17:59:29 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.882 17:59:29 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.882 17:59:29 nvme -- scripts/common.sh@368 -- # return 0 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.882 --rc genhtml_branch_coverage=1 00:09:59.882 --rc genhtml_function_coverage=1 00:09:59.882 --rc genhtml_legend=1 00:09:59.882 --rc geninfo_all_blocks=1 00:09:59.882 --rc geninfo_unexecuted_blocks=1 00:09:59.882 00:09:59.882 ' 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.882 --rc genhtml_branch_coverage=1 00:09:59.882 --rc genhtml_function_coverage=1 00:09:59.882 --rc genhtml_legend=1 00:09:59.882 --rc geninfo_all_blocks=1 00:09:59.882 --rc geninfo_unexecuted_blocks=1 00:09:59.882 00:09:59.882 ' 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.882 --rc genhtml_branch_coverage=1 00:09:59.882 --rc genhtml_function_coverage=1 00:09:59.882 --rc genhtml_legend=1 00:09:59.882 --rc geninfo_all_blocks=1 00:09:59.882 --rc geninfo_unexecuted_blocks=1 00:09:59.882 00:09:59.882 ' 00:09:59.882 17:59:29 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:59.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.882 --rc genhtml_branch_coverage=1 00:09:59.882 --rc genhtml_function_coverage=1 00:09:59.882 --rc genhtml_legend=1 00:09:59.882 --rc geninfo_all_blocks=1 00:09:59.882 --rc geninfo_unexecuted_blocks=1 00:09:59.882 00:09:59.882 ' 00:09:59.882 17:59:29 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:00.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.399 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.399 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.399 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.399 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.659 17:59:30 nvme -- nvme/nvme.sh@79 -- # uname 00:10:01.659 17:59:30 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:01.659 17:59:30 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:01.659 17:59:30 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:01.659 17:59:30 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:10:01.659 Waiting for stub to ready for secondary processes... 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1073 -- # stubpid=63923 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/63923 ]] 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:10:01.659 17:59:30 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:01.659 [2024-11-05 17:59:30.814925] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:10:01.659 [2024-11-05 17:59:30.815072] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:02.597 17:59:31 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:02.597 17:59:31 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/63923 ]] 00:10:02.597 17:59:31 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:10:02.597 [2024-11-05 17:59:31.837530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.857 [2024-11-05 17:59:31.943602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.857 [2024-11-05 17:59:31.943746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.857 [2024-11-05 17:59:31.943775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.857 [2024-11-05 17:59:31.961082] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:02.857 [2024-11-05 17:59:31.961123] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.857 [2024-11-05 17:59:31.976792] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:02.857 [2024-11-05 17:59:31.977070] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:02.857 [2024-11-05 17:59:31.981533] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.857 [2024-11-05 17:59:31.981965] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:02.857 [2024-11-05 17:59:31.982152] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:02.857 [2024-11-05 17:59:31.986177] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.857 [2024-11-05 17:59:31.986456] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:02.857 [2024-11-05 17:59:31.986588] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:02.857 [2024-11-05 17:59:31.990727] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:02.857 [2024-11-05 17:59:31.991002] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:02.857 [2024-11-05 17:59:31.991139] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:02.857 [2024-11-05 17:59:31.991251] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:02.857 [2024-11-05 17:59:31.991355] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:03.796 17:59:32 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:03.796 17:59:32 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:10:03.796 done. 00:10:03.796 17:59:32 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:03.796 17:59:32 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:10:03.796 17:59:32 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.796 17:59:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.796 ************************************ 00:10:03.796 START TEST nvme_reset 00:10:03.796 ************************************ 00:10:03.796 17:59:32 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:03.796 Initializing NVMe Controllers 00:10:03.796 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:03.796 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:03.796 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:03.796 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:03.796 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:03.796 00:10:03.796 real 0m0.282s 00:10:03.796 user 0m0.091s 00:10:03.796 sys 0m0.150s 00:10:03.796 17:59:33 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.796 17:59:33 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:03.796 ************************************ 00:10:03.796 END TEST nvme_reset 00:10:03.796 ************************************ 00:10:04.055 17:59:33 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:04.055 17:59:33 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.055 17:59:33 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.055 17:59:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.055 ************************************ 00:10:04.055 START TEST nvme_identify 00:10:04.055 ************************************ 00:10:04.055 17:59:33 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:10:04.055 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:04.055 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:04.055 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:04.055 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:04.055 17:59:33 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:04.055 17:59:33 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:10:04.055 17:59:33 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:04.055 17:59:33 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:04.055 17:59:33 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:04.055 17:59:33 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:04.055 17:59:33 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:04.055 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:04.317 ===================================================== 00:10:04.317 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.317 ===================================================== 00:10:04.317 Controller Capabilities/Features 00:10:04.317 ================================ 00:10:04.317 Vendor ID: 1b36 00:10:04.317 Subsystem Vendor ID: 1af4 00:10:04.317 Serial Number: 12340 00:10:04.317 Model Number: QEMU NVMe Ctrl 00:10:04.317 Firmware Version: 8.0.0 00:10:04.317 Recommended Arb Burst: 6 00:10:04.317 IEEE OUI Identifier: 00 54 52 00:10:04.317 Multi-path I/O 00:10:04.317 May have multiple subsystem ports: No 00:10:04.317 May have multiple controllers: No 00:10:04.317 Associated with SR-IOV VF: No 00:10:04.317 Max Data Transfer Size: 524288 00:10:04.317 Max Number of Namespaces: 256 00:10:04.317 Max Number of I/O Queues: 64 00:10:04.317 NVMe Specification Version (VS): 1.4 00:10:04.317 NVMe Specification Version (Identify): 1.4 00:10:04.317 Maximum Queue Entries: 2048 00:10:04.317 Contiguous Queues Required: Yes 00:10:04.317 Arbitration Mechanisms Supported 00:10:04.318 Weighted Round Robin: Not Supported 00:10:04.318 Vendor Specific: Not Supported 00:10:04.318 Reset Timeout: 7500 ms 00:10:04.318 Doorbell Stride: 4 bytes 00:10:04.318 NVM Subsystem Reset: Not Supported 00:10:04.318 Command Sets Supported 00:10:04.318 NVM Command Set: Supported 00:10:04.318 Boot Partition: Not Supported 00:10:04.318 Memory Page Size Minimum: 4096 bytes 00:10:04.318 Memory Page Size Maximum: 65536 bytes 00:10:04.318 Persistent Memory Region: Not Supported 00:10:04.318 Optional Asynchronous Events Supported 00:10:04.318 Namespace Attribute Notices: Supported 00:10:04.318 Firmware Activation Notices: Not Supported 00:10:04.318 ANA Change Notices: Not Supported 00:10:04.318 PLE Aggregate Log Change Notices: Not Supported 00:10:04.318 LBA Status Info Alert Notices: Not Supported 00:10:04.318 EGE Aggregate Log Change Notices: Not Supported 00:10:04.318 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.318 Zone Descriptor Change Notices: Not Supported 00:10:04.318 Discovery Log Change Notices: Not Supported 00:10:04.318 Controller Attributes 00:10:04.318 128-bit Host Identifier: Not Supported 00:10:04.318 Non-Operational Permissive Mode: Not Supported 00:10:04.318 NVM Sets: Not Supported 00:10:04.318 Read Recovery Levels: Not Supported 00:10:04.318 Endurance Groups: Not Supported 00:10:04.318 Predictable Latency Mode: Not Supported 00:10:04.318 Traffic Based Keep ALive: Not Supported 00:10:04.318 Namespace Granularity: Not Supported 00:10:04.318 SQ Associations: Not Supported 00:10:04.318 UUID List: Not Supported 00:10:04.318 Multi-Domain Subsystem: Not Supported 00:10:04.318 Fixed Capacity Management: Not Supported 00:10:04.318 Variable Capacity Management: Not Supported 00:10:04.318 Delete Endurance Group: Not Supported 00:10:04.318 Delete NVM Set: Not Supported 00:10:04.318 Extended LBA Formats Supported: Supported 00:10:04.318 Flexible Data Placement Supported: Not Supported 00:10:04.318 00:10:04.318 Controller Memory Buffer Support 00:10:04.318 ================================ 00:10:04.318 Supported: No 00:10:04.318 00:10:04.318 Persistent Memory Region Support 00:10:04.318 ================================ 00:10:04.318 Supported: No 00:10:04.318 00:10:04.318 Admin 
Command Set Attributes 00:10:04.318 ============================ 00:10:04.318 Security Send/Receive: Not Supported 00:10:04.318 Format NVM: Supported 00:10:04.318 Firmware Activate/Download: Not Supported 00:10:04.318 Namespace Management: Supported 00:10:04.318 Device Self-Test: Not Supported 00:10:04.318 Directives: Supported 00:10:04.318 NVMe-MI: Not Supported 00:10:04.318 Virtualization Management: Not Supported 00:10:04.318 Doorbell Buffer Config: Supported 00:10:04.318 Get LBA Status Capability: Not Supported 00:10:04.318 Command & Feature Lockdown Capability: Not Supported 00:10:04.318 Abort Command Limit: 4 00:10:04.318 Async Event Request Limit: 4 00:10:04.318 Number of Firmware Slots: N/A 00:10:04.318 Firmware Slot 1 Read-Only: N/A 00:10:04.318 Firmware Activation Without Reset: N/A 00:10:04.318 Multiple Update Detection Support: N/A 00:10:04.318 Firmware Update Granularity: No Information Provided 00:10:04.318 Per-Namespace SMART Log: Yes 00:10:04.318 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.318 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:04.318 Command Effects Log Page: Supported 00:10:04.318 Get Log Page Extended Data: Supported 00:10:04.318 Telemetry Log Pages: Not Supported 00:10:04.318 Persistent Event Log Pages: Not Supported 00:10:04.318 Supported Log Pages Log Page: May Support 00:10:04.318 Commands Supported & Effects Log Page: Not Supported 00:10:04.318 Feature Identifiers & Effects Log Page:May Support 00:10:04.318 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.318 Data Area 4 for Telemetry Log: Not Supported 00:10:04.318 Error Log Page Entries Supported: 1 00:10:04.318 Keep Alive: Not Supported 00:10:04.318 00:10:04.318 NVM Command Set Attributes 00:10:04.318 ========================== 00:10:04.318 Submission Queue Entry Size 00:10:04.318 Max: 64 00:10:04.318 Min: 64 00:10:04.318 Completion Queue Entry Size 00:10:04.318 Max: 16 00:10:04.318 Min: 16 00:10:04.318 Number of Namespaces: 256 00:10:04.318 Compare Command: Supported 00:10:04.318 Write Uncorrectable Command: Not Supported 00:10:04.318 Dataset Management Command: Supported 00:10:04.318 Write Zeroes Command: Supported 00:10:04.318 Set Features Save Field: Supported 00:10:04.318 Reservations: Not Supported 00:10:04.318 Timestamp: Supported 00:10:04.318 Copy: Supported 00:10:04.318 Volatile Write Cache: Present 00:10:04.318 Atomic Write Unit (Normal): 1 00:10:04.318 Atomic Write Unit (PFail): 1 00:10:04.318 Atomic Compare & Write Unit: 1 00:10:04.318 Fused Compare & Write: Not Supported 00:10:04.318 Scatter-Gather List 00:10:04.318 SGL Command Set: Supported 00:10:04.318 SGL Keyed: Not Supported 00:10:04.318 SGL Bit Bucket Descriptor: Not Supported 00:10:04.318 SGL Metadata Pointer: Not Supported 00:10:04.318 Oversized SGL: Not Supported 00:10:04.318 SGL Metadata Address: Not Supported 00:10:04.318 SGL Offset: Not Supported 00:10:04.318 Transport SGL Data Block: Not Supported 00:10:04.318 Replay Protected Memory Block: Not Supported 00:10:04.318 00:10:04.318 Firmware Slot Information 00:10:04.318 ========================= 00:10:04.318 Active slot: 1 00:10:04.318 Slot 1 Firmware Revision: 1.0 00:10:04.318 00:10:04.318 00:10:04.318 Commands Supported and Effects 00:10:04.318 ============================== 00:10:04.318 Admin Commands 00:10:04.318 -------------- 00:10:04.318 Delete I/O Submission Queue (00h): Supported 00:10:04.318 Create I/O Submission Queue (01h): Supported 00:10:04.318 Get Log Page (02h): Supported 00:10:04.318 Delete I/O Completion Queue (04h): Supported 
00:10:04.318 Create I/O Completion Queue (05h): Supported 00:10:04.318 Identify (06h): Supported 00:10:04.318 Abort (08h): Supported 00:10:04.318 Set Features (09h): Supported 00:10:04.318 Get Features (0Ah): Supported 00:10:04.318 Asynchronous Event Request (0Ch): Supported 00:10:04.318 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.318 Directive Send (19h): Supported 00:10:04.318 Directive Receive (1Ah): Supported 00:10:04.318 Virtualization Management (1Ch): Supported 00:10:04.318 Doorbell Buffer Config (7Ch): Supported 00:10:04.318 Format NVM (80h): Supported LBA-Change 00:10:04.318 I/O Commands 00:10:04.318 ------------ 00:10:04.318 Flush (00h): Supported LBA-Change 00:10:04.318 Write (01h): Supported LBA-Change 00:10:04.318 Read (02h): Supported 00:10:04.318 Compare (05h): Supported 00:10:04.318 Write Zeroes (08h): Supported LBA-Change 00:10:04.318 Dataset Management (09h): Supported LBA-Change 00:10:04.318 Unknown (0Ch): Supported 00:10:04.318 Unknown (12h): Supported 00:10:04.318 Copy (19h): Supported LBA-Change 00:10:04.318 Unknown (1Dh): Supported LBA-Change 00:10:04.318 00:10:04.318 Error Log 00:10:04.318 ========= 00:10:04.318 00:10:04.318 Arbitration 00:10:04.318 =========== 00:10:04.318 Arbitration Burst: no limit 00:10:04.318 00:10:04.318 Power Management 00:10:04.318 ================ 00:10:04.318 Number of Power States: 1 00:10:04.318 Current Power State: Power State #0 00:10:04.318 Power State #0: 00:10:04.318 Max Power: 25.00 W 00:10:04.318 Non-Operational State: Operational 00:10:04.318 Entry Latency: 16 microseconds 00:10:04.318 Exit Latency: 4 microseconds 00:10:04.318 Relative Read Throughput: 0 00:10:04.318 Relative Read Latency: 0 00:10:04.318 Relative Write Throughput: 0 00:10:04.318 Relative Write Latency: 0 00:10:04.318 [2024-11-05 17:59:33.472696] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63956 terminated unexpected 00:10:04.318 [2024-11-05 17:59:33.473779] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63956 terminated unexpected 00:10:04.318 Idle Power: Not Reported 00:10:04.318 Active Power: Not Reported 00:10:04.318 Non-Operational Permissive Mode: Not Supported 00:10:04.318 00:10:04.318 Health Information 00:10:04.318 ================== 00:10:04.318 Critical Warnings: 00:10:04.318 Available Spare Space: OK 00:10:04.318 Temperature: OK 00:10:04.318 Device Reliability: OK 00:10:04.318 Read Only: No 00:10:04.318 Volatile Memory Backup: OK 00:10:04.318 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.318 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.318 Available Spare: 0% 00:10:04.318 Available Spare Threshold: 0% 00:10:04.318 Life Percentage Used: 0% 00:10:04.318 Data Units Read: 784 00:10:04.318 Data Units Written: 712 00:10:04.318 Host Read Commands: 37886 00:10:04.318 Host Write Commands: 37672 00:10:04.318 Controller Busy Time: 0 minutes 00:10:04.318 Power Cycles: 0 00:10:04.318 Power On Hours: 0 hours 00:10:04.318 Unsafe Shutdowns: 0 00:10:04.318 Unrecoverable Media Errors: 0 00:10:04.319 Lifetime Error Log Entries: 0 00:10:04.319 Warning Temperature Time: 0 minutes 00:10:04.319 Critical Temperature Time: 0 minutes 00:10:04.319 00:10:04.319 Number of Queues 00:10:04.319 ================ 00:10:04.319 Number of I/O Submission Queues: 64 00:10:04.319 Number of I/O Completion Queues: 64 00:10:04.319 00:10:04.319 ZNS Specific Controller Data 00:10:04.319 ============================ 00:10:04.319 Zone Append Size Limit: 0 00:10:04.319
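Aside (not part of the captured run): the Health Information block above reports temperature as "323 Kelvin (50 Celsius)"; NVMe controllers report temperature in Kelvin, and the identify tool derives the Celsius figure by subtracting 273. A minimal bash sketch of the same conversion, where identify.txt is a hypothetical file holding the raw spdk_nvme_identify output (no Jenkins timestamps):

  # extract the Kelvin value from a line like "Current Temperature: 323 Kelvin (50 Celsius)"
  kelvin=$(awk '/Current Temperature:/ {print $3; exit}' identify.txt)
  # integer conversion, as the tool prints it: 323 K - 273 = 50 C
  echo "${kelvin} K = $((kelvin - 273)) C"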
00:10:04.319 00:10:04.319 Active Namespaces 00:10:04.319 ================= 00:10:04.319 Namespace ID:1 00:10:04.319 Error Recovery Timeout: Unlimited 00:10:04.319 Command Set Identifier: NVM (00h) 00:10:04.319 Deallocate: Supported 00:10:04.319 Deallocated/Unwritten Error: Supported 00:10:04.319 Deallocated Read Value: All 0x00 00:10:04.319 Deallocate in Write Zeroes: Not Supported 00:10:04.319 Deallocated Guard Field: 0xFFFF 00:10:04.319 Flush: Supported 00:10:04.319 Reservation: Not Supported 00:10:04.319 Metadata Transferred as: Separate Metadata Buffer 00:10:04.319 Namespace Sharing Capabilities: Private 00:10:04.319 Size (in LBAs): 1548666 (5GiB) 00:10:04.319 Capacity (in LBAs): 1548666 (5GiB) 00:10:04.319 Utilization (in LBAs): 1548666 (5GiB) 00:10:04.319 Thin Provisioning: Not Supported 00:10:04.319 Per-NS Atomic Units: No 00:10:04.319 Maximum Single Source Range Length: 128 00:10:04.319 Maximum Copy Length: 128 00:10:04.319 Maximum Source Range Count: 128 00:10:04.319 NGUID/EUI64 Never Reused: No 00:10:04.319 Namespace Write Protected: No 00:10:04.319 Number of LBA Formats: 8 00:10:04.319 Current LBA Format: LBA Format #07 00:10:04.319 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.319 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.319 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.319 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.319 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.319 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.319 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.319 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.319 00:10:04.319 NVM Specific Namespace Data 00:10:04.319 =========================== 00:10:04.319 Logical Block Storage Tag Mask: 0 00:10:04.319 Protection Information Capabilities: 00:10:04.319 16b Guard Protection Information Storage Tag Support: No 00:10:04.319 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.319 Storage Tag Check Read Support: No 00:10:04.319 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.319 ===================================================== 00:10:04.319 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.319 ===================================================== 00:10:04.319 Controller Capabilities/Features 00:10:04.319 ================================ 00:10:04.319 Vendor ID: 1b36 00:10:04.319 Subsystem Vendor ID: 1af4 00:10:04.319 Serial Number: 12341 00:10:04.319 Model Number: QEMU NVMe Ctrl 00:10:04.319 Firmware Version: 8.0.0 00:10:04.319 Recommended Arb Burst: 6 00:10:04.319 IEEE OUI Identifier: 00 54 52 00:10:04.319 Multi-path I/O 00:10:04.319 May have multiple subsystem ports: No 00:10:04.319 May have multiple controllers: No 
00:10:04.319 Associated with SR-IOV VF: No 00:10:04.319 Max Data Transfer Size: 524288 00:10:04.319 Max Number of Namespaces: 256 00:10:04.319 Max Number of I/O Queues: 64 00:10:04.319 NVMe Specification Version (VS): 1.4 00:10:04.319 NVMe Specification Version (Identify): 1.4 00:10:04.319 Maximum Queue Entries: 2048 00:10:04.319 Contiguous Queues Required: Yes 00:10:04.319 Arbitration Mechanisms Supported 00:10:04.319 Weighted Round Robin: Not Supported 00:10:04.319 Vendor Specific: Not Supported 00:10:04.319 Reset Timeout: 7500 ms 00:10:04.319 Doorbell Stride: 4 bytes 00:10:04.319 NVM Subsystem Reset: Not Supported 00:10:04.319 Command Sets Supported 00:10:04.319 NVM Command Set: Supported 00:10:04.319 Boot Partition: Not Supported 00:10:04.319 Memory Page Size Minimum: 4096 bytes 00:10:04.319 Memory Page Size Maximum: 65536 bytes 00:10:04.319 Persistent Memory Region: Not Supported 00:10:04.319 Optional Asynchronous Events Supported 00:10:04.319 Namespace Attribute Notices: Supported 00:10:04.319 Firmware Activation Notices: Not Supported 00:10:04.319 ANA Change Notices: Not Supported 00:10:04.319 PLE Aggregate Log Change Notices: Not Supported 00:10:04.319 LBA Status Info Alert Notices: Not Supported 00:10:04.319 EGE Aggregate Log Change Notices: Not Supported 00:10:04.319 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.319 Zone Descriptor Change Notices: Not Supported 00:10:04.319 Discovery Log Change Notices: Not Supported 00:10:04.319 Controller Attributes 00:10:04.319 128-bit Host Identifier: Not Supported 00:10:04.319 Non-Operational Permissive Mode: Not Supported 00:10:04.319 NVM Sets: Not Supported 00:10:04.319 Read Recovery Levels: Not Supported 00:10:04.319 Endurance Groups: Not Supported 00:10:04.319 Predictable Latency Mode: Not Supported 00:10:04.319 Traffic Based Keep ALive: Not Supported 00:10:04.319 Namespace Granularity: Not Supported 00:10:04.319 SQ Associations: Not Supported 00:10:04.319 UUID List: Not Supported 00:10:04.319 Multi-Domain Subsystem: Not Supported 00:10:04.319 Fixed Capacity Management: Not Supported 00:10:04.319 Variable Capacity Management: Not Supported 00:10:04.319 Delete Endurance Group: Not Supported 00:10:04.319 Delete NVM Set: Not Supported 00:10:04.319 Extended LBA Formats Supported: Supported 00:10:04.319 Flexible Data Placement Supported: Not Supported 00:10:04.319 00:10:04.319 Controller Memory Buffer Support 00:10:04.319 ================================ 00:10:04.319 Supported: No 00:10:04.319 00:10:04.319 Persistent Memory Region Support 00:10:04.319 ================================ 00:10:04.319 Supported: No 00:10:04.319 00:10:04.319 Admin Command Set Attributes 00:10:04.319 ============================ 00:10:04.319 Security Send/Receive: Not Supported 00:10:04.319 Format NVM: Supported 00:10:04.319 Firmware Activate/Download: Not Supported 00:10:04.319 Namespace Management: Supported 00:10:04.319 Device Self-Test: Not Supported 00:10:04.319 Directives: Supported 00:10:04.319 NVMe-MI: Not Supported 00:10:04.319 Virtualization Management: Not Supported 00:10:04.319 Doorbell Buffer Config: Supported 00:10:04.319 Get LBA Status Capability: Not Supported 00:10:04.319 Command & Feature Lockdown Capability: Not Supported 00:10:04.319 Abort Command Limit: 4 00:10:04.319 Async Event Request Limit: 4 00:10:04.319 Number of Firmware Slots: N/A 00:10:04.319 Firmware Slot 1 Read-Only: N/A 00:10:04.319 Firmware Activation Without Reset: N/A 00:10:04.319 Multiple Update Detection Support: N/A 00:10:04.319 Firmware Update Granularity: No 
Information Provided 00:10:04.319 Per-Namespace SMART Log: Yes 00:10:04.319 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.319 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:04.319 Command Effects Log Page: Supported 00:10:04.319 Get Log Page Extended Data: Supported 00:10:04.319 Telemetry Log Pages: Not Supported 00:10:04.319 Persistent Event Log Pages: Not Supported 00:10:04.319 Supported Log Pages Log Page: May Support 00:10:04.319 Commands Supported & Effects Log Page: Not Supported 00:10:04.319 Feature Identifiers & Effects Log Page:May Support 00:10:04.319 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.319 Data Area 4 for Telemetry Log: Not Supported 00:10:04.319 Error Log Page Entries Supported: 1 00:10:04.319 Keep Alive: Not Supported 00:10:04.319 00:10:04.319 NVM Command Set Attributes 00:10:04.319 ========================== 00:10:04.319 Submission Queue Entry Size 00:10:04.319 Max: 64 00:10:04.319 Min: 64 00:10:04.319 Completion Queue Entry Size 00:10:04.319 Max: 16 00:10:04.319 Min: 16 00:10:04.319 Number of Namespaces: 256 00:10:04.319 Compare Command: Supported 00:10:04.319 Write Uncorrectable Command: Not Supported 00:10:04.319 Dataset Management Command: Supported 00:10:04.319 Write Zeroes Command: Supported 00:10:04.319 Set Features Save Field: Supported 00:10:04.319 Reservations: Not Supported 00:10:04.319 Timestamp: Supported 00:10:04.319 Copy: Supported 00:10:04.319 Volatile Write Cache: Present 00:10:04.319 Atomic Write Unit (Normal): 1 00:10:04.319 Atomic Write Unit (PFail): 1 00:10:04.319 Atomic Compare & Write Unit: 1 00:10:04.319 Fused Compare & Write: Not Supported 00:10:04.319 Scatter-Gather List 00:10:04.319 SGL Command Set: Supported 00:10:04.320 SGL Keyed: Not Supported 00:10:04.320 SGL Bit Bucket Descriptor: Not Supported 00:10:04.320 SGL Metadata Pointer: Not Supported 00:10:04.320 Oversized SGL: Not Supported 00:10:04.320 SGL Metadata Address: Not Supported 00:10:04.320 SGL Offset: Not Supported 00:10:04.320 Transport SGL Data Block: Not Supported 00:10:04.320 Replay Protected Memory Block: Not Supported 00:10:04.320 00:10:04.320 Firmware Slot Information 00:10:04.320 ========================= 00:10:04.320 Active slot: 1 00:10:04.320 Slot 1 Firmware Revision: 1.0 00:10:04.320 00:10:04.320 00:10:04.320 Commands Supported and Effects 00:10:04.320 ============================== 00:10:04.320 Admin Commands 00:10:04.320 -------------- 00:10:04.320 Delete I/O Submission Queue (00h): Supported 00:10:04.320 Create I/O Submission Queue (01h): Supported 00:10:04.320 Get Log Page (02h): Supported 00:10:04.320 Delete I/O Completion Queue (04h): Supported 00:10:04.320 Create I/O Completion Queue (05h): Supported 00:10:04.320 Identify (06h): Supported 00:10:04.320 Abort (08h): Supported 00:10:04.320 Set Features (09h): Supported 00:10:04.320 Get Features (0Ah): Supported 00:10:04.320 Asynchronous Event Request (0Ch): Supported 00:10:04.320 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.320 Directive Send (19h): Supported 00:10:04.320 Directive Receive (1Ah): Supported 00:10:04.320 Virtualization Management (1Ch): Supported 00:10:04.320 Doorbell Buffer Config (7Ch): Supported 00:10:04.320 Format NVM (80h): Supported LBA-Change 00:10:04.320 I/O Commands 00:10:04.320 ------------ 00:10:04.320 Flush (00h): Supported LBA-Change 00:10:04.320 Write (01h): Supported LBA-Change 00:10:04.320 Read (02h): Supported 00:10:04.320 Compare (05h): Supported 00:10:04.320 Write Zeroes (08h): Supported LBA-Change 00:10:04.320 Dataset Management 
(09h): Supported LBA-Change 00:10:04.320 Unknown (0Ch): Supported 00:10:04.320 Unknown (12h): Supported 00:10:04.320 Copy (19h): Supported LBA-Change 00:10:04.320 Unknown (1Dh): Supported LBA-Change 00:10:04.320 00:10:04.320 Error Log 00:10:04.320 ========= 00:10:04.320 00:10:04.320 Arbitration 00:10:04.320 =========== 00:10:04.320 Arbitration Burst: no limit 00:10:04.320 00:10:04.320 Power Management 00:10:04.320 ================ 00:10:04.320 Number of Power States: 1 00:10:04.320 Current Power State: Power State #0 00:10:04.320 Power State #0: 00:10:04.320 Max Power: 25.00 W 00:10:04.320 Non-Operational State: Operational 00:10:04.320 Entry Latency: 16 microseconds 00:10:04.320 Exit Latency: 4 microseconds 00:10:04.320 Relative Read Throughput: 0 00:10:04.320 Relative Read Latency: 0 00:10:04.320 Relative Write Throughput: 0 00:10:04.320 Relative Write Latency: 0 00:10:04.320 Idle Power: Not Reported 00:10:04.320 Active Power: Not Reported 00:10:04.320 Non-Operational Permissive Mode: Not Supported 00:10:04.320 00:10:04.320 Health Information 00:10:04.320 ================== 00:10:04.320 Critical Warnings: 00:10:04.320 Available Spare Space: OK 00:10:04.320 [2024-11-05 17:59:33.474896] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63956 terminated unexpected 00:10:04.320 Temperature: OK 00:10:04.320 Device Reliability: OK 00:10:04.320 Read Only: No 00:10:04.320 Volatile Memory Backup: OK 00:10:04.320 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.320 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.320 Available Spare: 0% 00:10:04.320 Available Spare Threshold: 0% 00:10:04.320 Life Percentage Used: 0% 00:10:04.320 Data Units Read: 1187 00:10:04.320 Data Units Written: 1054 00:10:04.320 Host Read Commands: 56057 00:10:04.320 Host Write Commands: 54862 00:10:04.320 Controller Busy Time: 0 minutes 00:10:04.320 Power Cycles: 0 00:10:04.320 Power On Hours: 0 hours 00:10:04.320 Unsafe Shutdowns: 0 00:10:04.320 Unrecoverable Media Errors: 0 00:10:04.320 Lifetime Error Log Entries: 0 00:10:04.320 Warning Temperature Time: 0 minutes 00:10:04.320 Critical Temperature Time: 0 minutes 00:10:04.320 00:10:04.320 Number of Queues 00:10:04.320 ================ 00:10:04.320 Number of I/O Submission Queues: 64 00:10:04.320 Number of I/O Completion Queues: 64 00:10:04.320 00:10:04.320 ZNS Specific Controller Data 00:10:04.320 ============================ 00:10:04.320 Zone Append Size Limit: 0 00:10:04.320 00:10:04.320 00:10:04.320 Active Namespaces 00:10:04.320 ================= 00:10:04.320 Namespace ID:1 00:10:04.320 Error Recovery Timeout: Unlimited 00:10:04.320 Command Set Identifier: NVM (00h) 00:10:04.320 Deallocate: Supported 00:10:04.320 Deallocated/Unwritten Error: Supported 00:10:04.320 Deallocated Read Value: All 0x00 00:10:04.320 Deallocate in Write Zeroes: Not Supported 00:10:04.320 Deallocated Guard Field: 0xFFFF 00:10:04.320 Flush: Supported 00:10:04.320 Reservation: Not Supported 00:10:04.320 Namespace Sharing Capabilities: Private 00:10:04.320 Size (in LBAs): 1310720 (5GiB) 00:10:04.320 Capacity (in LBAs): 1310720 (5GiB) 00:10:04.320 Utilization (in LBAs): 1310720 (5GiB) 00:10:04.320 Thin Provisioning: Not Supported 00:10:04.320 Per-NS Atomic Units: No 00:10:04.320 Maximum Single Source Range Length: 128 00:10:04.320 Maximum Copy Length: 128 00:10:04.320 Maximum Source Range Count: 128 00:10:04.320 NGUID/EUI64 Never Reused: No 00:10:04.320 Namespace Write Protected: No 00:10:04.320 Number of LBA Formats: 8 00:10:04.320 Current LBA 
Format: LBA Format #04 00:10:04.320 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.320 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.320 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.320 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.320 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.320 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.320 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.320 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.320 00:10:04.320 NVM Specific Namespace Data 00:10:04.320 =========================== 00:10:04.320 Logical Block Storage Tag Mask: 0 00:10:04.320 Protection Information Capabilities: 00:10:04.320 16b Guard Protection Information Storage Tag Support: No 00:10:04.320 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.320 Storage Tag Check Read Support: No 00:10:04.320 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.320 ===================================================== 00:10:04.320 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:04.320 ===================================================== 00:10:04.320 Controller Capabilities/Features 00:10:04.320 ================================ 00:10:04.320 Vendor ID: 1b36 00:10:04.320 Subsystem Vendor ID: 1af4 00:10:04.320 Serial Number: 12343 00:10:04.320 Model Number: QEMU NVMe Ctrl 00:10:04.320 Firmware Version: 8.0.0 00:10:04.320 Recommended Arb Burst: 6 00:10:04.320 IEEE OUI Identifier: 00 54 52 00:10:04.320 Multi-path I/O 00:10:04.320 May have multiple subsystem ports: No 00:10:04.320 May have multiple controllers: Yes 00:10:04.320 Associated with SR-IOV VF: No 00:10:04.320 Max Data Transfer Size: 524288 00:10:04.320 Max Number of Namespaces: 256 00:10:04.320 Max Number of I/O Queues: 64 00:10:04.320 NVMe Specification Version (VS): 1.4 00:10:04.320 NVMe Specification Version (Identify): 1.4 00:10:04.320 Maximum Queue Entries: 2048 00:10:04.320 Contiguous Queues Required: Yes 00:10:04.320 Arbitration Mechanisms Supported 00:10:04.320 Weighted Round Robin: Not Supported 00:10:04.320 Vendor Specific: Not Supported 00:10:04.320 Reset Timeout: 7500 ms 00:10:04.320 Doorbell Stride: 4 bytes 00:10:04.320 NVM Subsystem Reset: Not Supported 00:10:04.320 Command Sets Supported 00:10:04.320 NVM Command Set: Supported 00:10:04.320 Boot Partition: Not Supported 00:10:04.320 Memory Page Size Minimum: 4096 bytes 00:10:04.320 Memory Page Size Maximum: 65536 bytes 00:10:04.320 Persistent Memory Region: Not Supported 00:10:04.320 Optional Asynchronous Events Supported 00:10:04.320 Namespace Attribute Notices: Supported 00:10:04.320 Firmware Activation Notices: Not Supported 00:10:04.320 ANA Change Notices: Not Supported 00:10:04.320 PLE Aggregate 
Log Change Notices: Not Supported 00:10:04.320 LBA Status Info Alert Notices: Not Supported 00:10:04.320 EGE Aggregate Log Change Notices: Not Supported 00:10:04.320 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.320 Zone Descriptor Change Notices: Not Supported 00:10:04.320 Discovery Log Change Notices: Not Supported 00:10:04.321 Controller Attributes 00:10:04.321 128-bit Host Identifier: Not Supported 00:10:04.321 Non-Operational Permissive Mode: Not Supported 00:10:04.321 NVM Sets: Not Supported 00:10:04.321 Read Recovery Levels: Not Supported 00:10:04.321 Endurance Groups: Supported 00:10:04.321 Predictable Latency Mode: Not Supported 00:10:04.321 Traffic Based Keep ALive: Not Supported 00:10:04.321 Namespace Granularity: Not Supported 00:10:04.321 SQ Associations: Not Supported 00:10:04.321 UUID List: Not Supported 00:10:04.321 Multi-Domain Subsystem: Not Supported 00:10:04.321 Fixed Capacity Management: Not Supported 00:10:04.321 Variable Capacity Management: Not Supported 00:10:04.321 Delete Endurance Group: Not Supported 00:10:04.321 Delete NVM Set: Not Supported 00:10:04.321 Extended LBA Formats Supported: Supported 00:10:04.321 Flexible Data Placement Supported: Supported 00:10:04.321 00:10:04.321 Controller Memory Buffer Support 00:10:04.321 ================================ 00:10:04.321 Supported: No 00:10:04.321 00:10:04.321 Persistent Memory Region Support 00:10:04.321 ================================ 00:10:04.321 Supported: No 00:10:04.321 00:10:04.321 Admin Command Set Attributes 00:10:04.321 ============================ 00:10:04.321 Security Send/Receive: Not Supported 00:10:04.321 Format NVM: Supported 00:10:04.321 Firmware Activate/Download: Not Supported 00:10:04.321 Namespace Management: Supported 00:10:04.321 Device Self-Test: Not Supported 00:10:04.321 Directives: Supported 00:10:04.321 NVMe-MI: Not Supported 00:10:04.321 Virtualization Management: Not Supported 00:10:04.321 Doorbell Buffer Config: Supported 00:10:04.321 Get LBA Status Capability: Not Supported 00:10:04.321 Command & Feature Lockdown Capability: Not Supported 00:10:04.321 Abort Command Limit: 4 00:10:04.321 Async Event Request Limit: 4 00:10:04.321 Number of Firmware Slots: N/A 00:10:04.321 Firmware Slot 1 Read-Only: N/A 00:10:04.321 Firmware Activation Without Reset: N/A 00:10:04.321 Multiple Update Detection Support: N/A 00:10:04.321 Firmware Update Granularity: No Information Provided 00:10:04.321 Per-Namespace SMART Log: Yes 00:10:04.321 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.321 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:04.321 Command Effects Log Page: Supported 00:10:04.321 Get Log Page Extended Data: Supported 00:10:04.321 Telemetry Log Pages: Not Supported 00:10:04.321 Persistent Event Log Pages: Not Supported 00:10:04.321 Supported Log Pages Log Page: May Support 00:10:04.321 Commands Supported & Effects Log Page: Not Supported 00:10:04.321 Feature Identifiers & Effects Log Page:May Support 00:10:04.321 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.321 Data Area 4 for Telemetry Log: Not Supported 00:10:04.321 Error Log Page Entries Supported: 1 00:10:04.321 Keep Alive: Not Supported 00:10:04.321 00:10:04.321 NVM Command Set Attributes 00:10:04.321 ========================== 00:10:04.321 Submission Queue Entry Size 00:10:04.321 Max: 64 00:10:04.321 Min: 64 00:10:04.321 Completion Queue Entry Size 00:10:04.321 Max: 16 00:10:04.321 Min: 16 00:10:04.321 Number of Namespaces: 256 00:10:04.321 Compare Command: Supported 00:10:04.321 Write 
Uncorrectable Command: Not Supported 00:10:04.321 Dataset Management Command: Supported 00:10:04.321 Write Zeroes Command: Supported 00:10:04.321 Set Features Save Field: Supported 00:10:04.321 Reservations: Not Supported 00:10:04.321 Timestamp: Supported 00:10:04.321 Copy: Supported 00:10:04.321 Volatile Write Cache: Present 00:10:04.321 Atomic Write Unit (Normal): 1 00:10:04.321 Atomic Write Unit (PFail): 1 00:10:04.321 Atomic Compare & Write Unit: 1 00:10:04.321 Fused Compare & Write: Not Supported 00:10:04.321 Scatter-Gather List 00:10:04.321 SGL Command Set: Supported 00:10:04.321 SGL Keyed: Not Supported 00:10:04.321 SGL Bit Bucket Descriptor: Not Supported 00:10:04.321 SGL Metadata Pointer: Not Supported 00:10:04.321 Oversized SGL: Not Supported 00:10:04.321 SGL Metadata Address: Not Supported 00:10:04.321 SGL Offset: Not Supported 00:10:04.321 Transport SGL Data Block: Not Supported 00:10:04.321 Replay Protected Memory Block: Not Supported 00:10:04.321 00:10:04.321 Firmware Slot Information 00:10:04.321 ========================= 00:10:04.321 Active slot: 1 00:10:04.321 Slot 1 Firmware Revision: 1.0 00:10:04.321 00:10:04.321 00:10:04.321 Commands Supported and Effects 00:10:04.321 ============================== 00:10:04.321 Admin Commands 00:10:04.321 -------------- 00:10:04.321 Delete I/O Submission Queue (00h): Supported 00:10:04.321 Create I/O Submission Queue (01h): Supported 00:10:04.321 Get Log Page (02h): Supported 00:10:04.321 Delete I/O Completion Queue (04h): Supported 00:10:04.321 Create I/O Completion Queue (05h): Supported 00:10:04.321 Identify (06h): Supported 00:10:04.321 Abort (08h): Supported 00:10:04.321 Set Features (09h): Supported 00:10:04.321 Get Features (0Ah): Supported 00:10:04.321 Asynchronous Event Request (0Ch): Supported 00:10:04.321 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.321 Directive Send (19h): Supported 00:10:04.321 Directive Receive (1Ah): Supported 00:10:04.321 Virtualization Management (1Ch): Supported 00:10:04.321 Doorbell Buffer Config (7Ch): Supported 00:10:04.321 Format NVM (80h): Supported LBA-Change 00:10:04.321 I/O Commands 00:10:04.321 ------------ 00:10:04.321 Flush (00h): Supported LBA-Change 00:10:04.321 Write (01h): Supported LBA-Change 00:10:04.321 Read (02h): Supported 00:10:04.321 Compare (05h): Supported 00:10:04.321 Write Zeroes (08h): Supported LBA-Change 00:10:04.321 Dataset Management (09h): Supported LBA-Change 00:10:04.321 Unknown (0Ch): Supported 00:10:04.321 Unknown (12h): Supported 00:10:04.321 Copy (19h): Supported LBA-Change 00:10:04.321 Unknown (1Dh): Supported LBA-Change 00:10:04.321 00:10:04.321 Error Log 00:10:04.321 ========= 00:10:04.321 00:10:04.321 Arbitration 00:10:04.321 =========== 00:10:04.321 Arbitration Burst: no limit 00:10:04.321 00:10:04.321 Power Management 00:10:04.321 ================ 00:10:04.321 Number of Power States: 1 00:10:04.321 Current Power State: Power State #0 00:10:04.321 Power State #0: 00:10:04.321 Max Power: 25.00 W 00:10:04.321 Non-Operational State: Operational 00:10:04.321 Entry Latency: 16 microseconds 00:10:04.321 Exit Latency: 4 microseconds 00:10:04.321 Relative Read Throughput: 0 00:10:04.321 Relative Read Latency: 0 00:10:04.321 Relative Write Throughput: 0 00:10:04.321 Relative Write Latency: 0 00:10:04.321 Idle Power: Not Reported 00:10:04.321 Active Power: Not Reported 00:10:04.321 Non-Operational Permissive Mode: Not Supported 00:10:04.321 00:10:04.321 Health Information 00:10:04.321 ================== 00:10:04.321 Critical Warnings: 00:10:04.321 
Available Spare Space: OK 00:10:04.321 Temperature: OK 00:10:04.321 Device Reliability: OK 00:10:04.321 Read Only: No 00:10:04.321 Volatile Memory Backup: OK 00:10:04.321 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.321 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.321 Available Spare: 0% 00:10:04.321 Available Spare Threshold: 0% 00:10:04.321 Life Percentage Used: 0% 00:10:04.321 Data Units Read: 902 00:10:04.321 Data Units Written: 831 00:10:04.321 Host Read Commands: 39342 00:10:04.321 Host Write Commands: 38765 00:10:04.321 Controller Busy Time: 0 minutes 00:10:04.321 Power Cycles: 0 00:10:04.321 Power On Hours: 0 hours 00:10:04.321 Unsafe Shutdowns: 0 00:10:04.321 Unrecoverable Media Errors: 0 00:10:04.321 Lifetime Error Log Entries: 0 00:10:04.321 Warning Temperature Time: 0 minutes 00:10:04.321 Critical Temperature Time: 0 minutes 00:10:04.321 00:10:04.321 Number of Queues 00:10:04.321 ================ 00:10:04.321 Number of I/O Submission Queues: 64 00:10:04.321 Number of I/O Completion Queues: 64 00:10:04.321 00:10:04.321 ZNS Specific Controller Data 00:10:04.321 ============================ 00:10:04.321 Zone Append Size Limit: 0 00:10:04.321 00:10:04.321 00:10:04.321 Active Namespaces 00:10:04.321 ================= 00:10:04.321 Namespace ID:1 00:10:04.321 Error Recovery Timeout: Unlimited 00:10:04.321 Command Set Identifier: NVM (00h) 00:10:04.321 Deallocate: Supported 00:10:04.321 Deallocated/Unwritten Error: Supported 00:10:04.321 Deallocated Read Value: All 0x00 00:10:04.321 Deallocate in Write Zeroes: Not Supported 00:10:04.321 Deallocated Guard Field: 0xFFFF 00:10:04.321 Flush: Supported 00:10:04.321 Reservation: Not Supported 00:10:04.321 Namespace Sharing Capabilities: Multiple Controllers 00:10:04.321 Size (in LBAs): 262144 (1GiB) 00:10:04.321 Capacity (in LBAs): 262144 (1GiB) 00:10:04.321 Utilization (in LBAs): 262144 (1GiB) 00:10:04.321 Thin Provisioning: Not Supported 00:10:04.321 Per-NS Atomic Units: No 00:10:04.321 Maximum Single Source Range Length: 128 00:10:04.322 Maximum Copy Length: 128 00:10:04.322 Maximum Source Range Count: 128 00:10:04.322 NGUID/EUI64 Never Reused: No 00:10:04.322 Namespace Write Protected: No 00:10:04.322 Endurance group ID: 1 00:10:04.322 Number of LBA Formats: 8 00:10:04.322 Current LBA Format: LBA Format #04 00:10:04.322 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.322 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.322 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.322 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.322 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.322 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.322 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.322 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.322 00:10:04.322 Get Feature FDP: 00:10:04.322 ================ 00:10:04.322 Enabled: Yes 00:10:04.322 FDP configuration index: 0 00:10:04.322 00:10:04.322 FDP configurations log page 00:10:04.322 =========================== 00:10:04.322 Number of FDP configurations: 1 00:10:04.322 Version: 0 00:10:04.322 Size: 112 00:10:04.322 FDP Configuration Descriptor: 0 00:10:04.322 Descriptor Size: 96 00:10:04.322 Reclaim Group Identifier format: 2 00:10:04.322 FDP Volatile Write Cache: Not Present 00:10:04.322 FDP Configuration: Valid 00:10:04.322 Vendor Specific Size: 0 00:10:04.322 Number of Reclaim Groups: 2 00:10:04.322 Number of Reclaim Unit Handles: 8 00:10:04.322 Max Placement Identifiers: 128 00:10:04.322
Number of Namespaces Supported: 256 00:10:04.322 Reclaim unit Nominal Size: 6000000 bytes 00:10:04.322 Estimated Reclaim Unit Time Limit: Not Reported 00:10:04.322 RUH Desc #000: RUH Type: Initially Isolated 00:10:04.322 RUH Desc #001: RUH Type: Initially Isolated 00:10:04.322 RUH Desc #002: RUH Type: Initially Isolated 00:10:04.322 RUH Desc #003: RUH Type: Initially Isolated 00:10:04.322 RUH Desc #004: RUH Type: Initially Isolated 00:10:04.322 RUH Desc #005: RUH Type: Initially Isolated 00:10:04.322 RUH Desc #006: RUH Type: Initially Isolated 00:10:04.322 RUH Desc #007: RUH Type: Initially Isolated 00:10:04.322 00:10:04.322 FDP reclaim unit handle usage log page 00:10:04.322 ====================================== 00:10:04.322 Number of Reclaim Unit Handles: 8 00:10:04.322 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:04.322 RUH Usage Desc #001: RUH Attributes: Unused 00:10:04.322 RUH Usage Desc #002: RUH Attributes: Unused 00:10:04.322 RUH Usage Desc #003: RUH Attributes: Unused 00:10:04.322 RUH Usage Desc #004: RUH Attributes: Unused 00:10:04.322 RUH Usage Desc #005: RUH Attributes: Unused 00:10:04.322 RUH Usage Desc #006: RUH Attributes: Unused 00:10:04.322 RUH Usage Desc #007: RUH Attributes: Unused 00:10:04.322 00:10:04.322 FDP statistics log page 00:10:04.322 ======================= 00:10:04.322 Host bytes with metadata written: 536518656 00:10:04.322 [2024-11-05 17:59:33.476663] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63956 terminated unexpected 00:10:04.322 Media bytes with metadata written: 536576000 00:10:04.322 Media bytes erased: 0 00:10:04.322 00:10:04.322 FDP events log page 00:10:04.322 =================== 00:10:04.322 Number of FDP events: 0 00:10:04.322 00:10:04.322 NVM Specific Namespace Data 00:10:04.322 =========================== 00:10:04.322 Logical Block Storage Tag Mask: 0 00:10:04.322 Protection Information Capabilities: 00:10:04.322 16b Guard Protection Information Storage Tag Support: No 00:10:04.322 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.322 Storage Tag Check Read Support: No 00:10:04.322 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.322 ===================================================== 00:10:04.322 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:04.322 ===================================================== 00:10:04.322 Controller Capabilities/Features 00:10:04.322 ================================ 00:10:04.322 Vendor ID: 1b36 00:10:04.322 Subsystem Vendor ID: 1af4 00:10:04.322 Serial Number: 12342 00:10:04.322 Model Number: QEMU NVMe Ctrl 00:10:04.322 Firmware Version: 8.0.0 00:10:04.322 Recommended Arb Burst: 6 00:10:04.322 IEEE OUI Identifier: 00 54 52 00:10:04.322 Multi-path I/O 
00:10:04.322 May have multiple subsystem ports: No 00:10:04.322 May have multiple controllers: No 00:10:04.322 Associated with SR-IOV VF: No 00:10:04.322 Max Data Transfer Size: 524288 00:10:04.322 Max Number of Namespaces: 256 00:10:04.322 Max Number of I/O Queues: 64 00:10:04.322 NVMe Specification Version (VS): 1.4 00:10:04.322 NVMe Specification Version (Identify): 1.4 00:10:04.322 Maximum Queue Entries: 2048 00:10:04.322 Contiguous Queues Required: Yes 00:10:04.322 Arbitration Mechanisms Supported 00:10:04.322 Weighted Round Robin: Not Supported 00:10:04.322 Vendor Specific: Not Supported 00:10:04.322 Reset Timeout: 7500 ms 00:10:04.322 Doorbell Stride: 4 bytes 00:10:04.322 NVM Subsystem Reset: Not Supported 00:10:04.322 Command Sets Supported 00:10:04.322 NVM Command Set: Supported 00:10:04.322 Boot Partition: Not Supported 00:10:04.322 Memory Page Size Minimum: 4096 bytes 00:10:04.322 Memory Page Size Maximum: 65536 bytes 00:10:04.322 Persistent Memory Region: Not Supported 00:10:04.322 Optional Asynchronous Events Supported 00:10:04.322 Namespace Attribute Notices: Supported 00:10:04.322 Firmware Activation Notices: Not Supported 00:10:04.322 ANA Change Notices: Not Supported 00:10:04.322 PLE Aggregate Log Change Notices: Not Supported 00:10:04.322 LBA Status Info Alert Notices: Not Supported 00:10:04.322 EGE Aggregate Log Change Notices: Not Supported 00:10:04.322 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.322 Zone Descriptor Change Notices: Not Supported 00:10:04.322 Discovery Log Change Notices: Not Supported 00:10:04.322 Controller Attributes 00:10:04.322 128-bit Host Identifier: Not Supported 00:10:04.322 Non-Operational Permissive Mode: Not Supported 00:10:04.322 NVM Sets: Not Supported 00:10:04.322 Read Recovery Levels: Not Supported 00:10:04.322 Endurance Groups: Not Supported 00:10:04.322 Predictable Latency Mode: Not Supported 00:10:04.322 Traffic Based Keep ALive: Not Supported 00:10:04.322 Namespace Granularity: Not Supported 00:10:04.322 SQ Associations: Not Supported 00:10:04.322 UUID List: Not Supported 00:10:04.322 Multi-Domain Subsystem: Not Supported 00:10:04.322 Fixed Capacity Management: Not Supported 00:10:04.322 Variable Capacity Management: Not Supported 00:10:04.322 Delete Endurance Group: Not Supported 00:10:04.322 Delete NVM Set: Not Supported 00:10:04.322 Extended LBA Formats Supported: Supported 00:10:04.322 Flexible Data Placement Supported: Not Supported 00:10:04.322 00:10:04.322 Controller Memory Buffer Support 00:10:04.322 ================================ 00:10:04.322 Supported: No 00:10:04.322 00:10:04.322 Persistent Memory Region Support 00:10:04.322 ================================ 00:10:04.322 Supported: No 00:10:04.322 00:10:04.322 Admin Command Set Attributes 00:10:04.323 ============================ 00:10:04.323 Security Send/Receive: Not Supported 00:10:04.323 Format NVM: Supported 00:10:04.323 Firmware Activate/Download: Not Supported 00:10:04.323 Namespace Management: Supported 00:10:04.323 Device Self-Test: Not Supported 00:10:04.323 Directives: Supported 00:10:04.323 NVMe-MI: Not Supported 00:10:04.323 Virtualization Management: Not Supported 00:10:04.323 Doorbell Buffer Config: Supported 00:10:04.323 Get LBA Status Capability: Not Supported 00:10:04.323 Command & Feature Lockdown Capability: Not Supported 00:10:04.323 Abort Command Limit: 4 00:10:04.323 Async Event Request Limit: 4 00:10:04.323 Number of Firmware Slots: N/A 00:10:04.323 Firmware Slot 1 Read-Only: N/A 00:10:04.323 Firmware Activation Without Reset: N/A 
00:10:04.323 Multiple Update Detection Support: N/A 00:10:04.323 Firmware Update Granularity: No Information Provided 00:10:04.323 Per-Namespace SMART Log: Yes 00:10:04.323 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.323 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:04.323 Command Effects Log Page: Supported 00:10:04.323 Get Log Page Extended Data: Supported 00:10:04.323 Telemetry Log Pages: Not Supported 00:10:04.323 Persistent Event Log Pages: Not Supported 00:10:04.323 Supported Log Pages Log Page: May Support 00:10:04.323 Commands Supported & Effects Log Page: Not Supported 00:10:04.323 Feature Identifiers & Effects Log Page:May Support 00:10:04.323 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.323 Data Area 4 for Telemetry Log: Not Supported 00:10:04.323 Error Log Page Entries Supported: 1 00:10:04.323 Keep Alive: Not Supported 00:10:04.323 00:10:04.323 NVM Command Set Attributes 00:10:04.323 ========================== 00:10:04.323 Submission Queue Entry Size 00:10:04.323 Max: 64 00:10:04.323 Min: 64 00:10:04.323 Completion Queue Entry Size 00:10:04.323 Max: 16 00:10:04.323 Min: 16 00:10:04.323 Number of Namespaces: 256 00:10:04.323 Compare Command: Supported 00:10:04.323 Write Uncorrectable Command: Not Supported 00:10:04.323 Dataset Management Command: Supported 00:10:04.323 Write Zeroes Command: Supported 00:10:04.323 Set Features Save Field: Supported 00:10:04.323 Reservations: Not Supported 00:10:04.323 Timestamp: Supported 00:10:04.323 Copy: Supported 00:10:04.323 Volatile Write Cache: Present 00:10:04.323 Atomic Write Unit (Normal): 1 00:10:04.323 Atomic Write Unit (PFail): 1 00:10:04.323 Atomic Compare & Write Unit: 1 00:10:04.323 Fused Compare & Write: Not Supported 00:10:04.323 Scatter-Gather List 00:10:04.323 SGL Command Set: Supported 00:10:04.323 SGL Keyed: Not Supported 00:10:04.323 SGL Bit Bucket Descriptor: Not Supported 00:10:04.323 SGL Metadata Pointer: Not Supported 00:10:04.323 Oversized SGL: Not Supported 00:10:04.323 SGL Metadata Address: Not Supported 00:10:04.323 SGL Offset: Not Supported 00:10:04.323 Transport SGL Data Block: Not Supported 00:10:04.323 Replay Protected Memory Block: Not Supported 00:10:04.323 00:10:04.323 Firmware Slot Information 00:10:04.323 ========================= 00:10:04.323 Active slot: 1 00:10:04.323 Slot 1 Firmware Revision: 1.0 00:10:04.323 00:10:04.323 00:10:04.323 Commands Supported and Effects 00:10:04.323 ============================== 00:10:04.323 Admin Commands 00:10:04.323 -------------- 00:10:04.323 Delete I/O Submission Queue (00h): Supported 00:10:04.323 Create I/O Submission Queue (01h): Supported 00:10:04.323 Get Log Page (02h): Supported 00:10:04.323 Delete I/O Completion Queue (04h): Supported 00:10:04.323 Create I/O Completion Queue (05h): Supported 00:10:04.323 Identify (06h): Supported 00:10:04.323 Abort (08h): Supported 00:10:04.323 Set Features (09h): Supported 00:10:04.323 Get Features (0Ah): Supported 00:10:04.323 Asynchronous Event Request (0Ch): Supported 00:10:04.323 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.323 Directive Send (19h): Supported 00:10:04.323 Directive Receive (1Ah): Supported 00:10:04.323 Virtualization Management (1Ch): Supported 00:10:04.323 Doorbell Buffer Config (7Ch): Supported 00:10:04.323 Format NVM (80h): Supported LBA-Change 00:10:04.323 I/O Commands 00:10:04.323 ------------ 00:10:04.323 Flush (00h): Supported LBA-Change 00:10:04.323 Write (01h): Supported LBA-Change 00:10:04.323 Read (02h): Supported 00:10:04.323 Compare (05h): 
Supported 00:10:04.323 Write Zeroes (08h): Supported LBA-Change 00:10:04.323 Dataset Management (09h): Supported LBA-Change 00:10:04.323 Unknown (0Ch): Supported 00:10:04.323 Unknown (12h): Supported 00:10:04.323 Copy (19h): Supported LBA-Change 00:10:04.323 Unknown (1Dh): Supported LBA-Change 00:10:04.323 00:10:04.323 Error Log 00:10:04.323 ========= 00:10:04.323 00:10:04.323 Arbitration 00:10:04.323 =========== 00:10:04.323 Arbitration Burst: no limit 00:10:04.323 00:10:04.323 Power Management 00:10:04.323 ================ 00:10:04.323 Number of Power States: 1 00:10:04.323 Current Power State: Power State #0 00:10:04.323 Power State #0: 00:10:04.323 Max Power: 25.00 W 00:10:04.323 Non-Operational State: Operational 00:10:04.323 Entry Latency: 16 microseconds 00:10:04.323 Exit Latency: 4 microseconds 00:10:04.323 Relative Read Throughput: 0 00:10:04.323 Relative Read Latency: 0 00:10:04.323 Relative Write Throughput: 0 00:10:04.323 Relative Write Latency: 0 00:10:04.323 Idle Power: Not Reported 00:10:04.323 Active Power: Not Reported 00:10:04.323 Non-Operational Permissive Mode: Not Supported 00:10:04.323 00:10:04.323 Health Information 00:10:04.323 ================== 00:10:04.323 Critical Warnings: 00:10:04.323 Available Spare Space: OK 00:10:04.323 Temperature: OK 00:10:04.323 Device Reliability: OK 00:10:04.323 Read Only: No 00:10:04.323 Volatile Memory Backup: OK 00:10:04.323 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.323 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.323 Available Spare: 0% 00:10:04.323 Available Spare Threshold: 0% 00:10:04.323 Life Percentage Used: 0% 00:10:04.323 Data Units Read: 2487 00:10:04.323 Data Units Written: 2274 00:10:04.323 Host Read Commands: 115850 00:10:04.323 Host Write Commands: 114119 00:10:04.323 Controller Busy Time: 0 minutes 00:10:04.323 Power Cycles: 0 00:10:04.323 Power On Hours: 0 hours 00:10:04.323 Unsafe Shutdowns: 0 00:10:04.323 Unrecoverable Media Errors: 0 00:10:04.323 Lifetime Error Log Entries: 0 00:10:04.323 Warning Temperature Time: 0 minutes 00:10:04.323 Critical Temperature Time: 0 minutes 00:10:04.323 00:10:04.323 Number of Queues 00:10:04.323 ================ 00:10:04.323 Number of I/O Submission Queues: 64 00:10:04.323 Number of I/O Completion Queues: 64 00:10:04.323 00:10:04.323 ZNS Specific Controller Data 00:10:04.323 ============================ 00:10:04.323 Zone Append Size Limit: 0 00:10:04.323 00:10:04.323 00:10:04.323 Active Namespaces 00:10:04.323 ================= 00:10:04.323 Namespace ID:1 00:10:04.323 Error Recovery Timeout: Unlimited 00:10:04.323 Command Set Identifier: NVM (00h) 00:10:04.323 Deallocate: Supported 00:10:04.323 Deallocated/Unwritten Error: Supported 00:10:04.323 Deallocated Read Value: All 0x00 00:10:04.323 Deallocate in Write Zeroes: Not Supported 00:10:04.323 Deallocated Guard Field: 0xFFFF 00:10:04.323 Flush: Supported 00:10:04.323 Reservation: Not Supported 00:10:04.323 Namespace Sharing Capabilities: Private 00:10:04.323 Size (in LBAs): 1048576 (4GiB) 00:10:04.323 Capacity (in LBAs): 1048576 (4GiB) 00:10:04.323 Utilization (in LBAs): 1048576 (4GiB) 00:10:04.323 Thin Provisioning: Not Supported 00:10:04.323 Per-NS Atomic Units: No 00:10:04.323 Maximum Single Source Range Length: 128 00:10:04.323 Maximum Copy Length: 128 00:10:04.323 Maximum Source Range Count: 128 00:10:04.323 NGUID/EUI64 Never Reused: No 00:10:04.323 Namespace Write Protected: No 00:10:04.323 Number of LBA Formats: 8 00:10:04.323 Current LBA Format: LBA Format #04 00:10:04.323 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:10:04.323 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.323 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.323 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.323 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.323 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.323 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.323 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.323 00:10:04.323 NVM Specific Namespace Data 00:10:04.323 =========================== 00:10:04.323 Logical Block Storage Tag Mask: 0 00:10:04.323 Protection Information Capabilities: 00:10:04.323 16b Guard Protection Information Storage Tag Support: No 00:10:04.323 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.323 Storage Tag Check Read Support: No 00:10:04.323 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.323 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Namespace ID:2 00:10:04.324 Error Recovery Timeout: Unlimited 00:10:04.324 Command Set Identifier: NVM (00h) 00:10:04.324 Deallocate: Supported 00:10:04.324 Deallocated/Unwritten Error: Supported 00:10:04.324 Deallocated Read Value: All 0x00 00:10:04.324 Deallocate in Write Zeroes: Not Supported 00:10:04.324 Deallocated Guard Field: 0xFFFF 00:10:04.324 Flush: Supported 00:10:04.324 Reservation: Not Supported 00:10:04.324 Namespace Sharing Capabilities: Private 00:10:04.324 Size (in LBAs): 1048576 (4GiB) 00:10:04.324 Capacity (in LBAs): 1048576 (4GiB) 00:10:04.324 Utilization (in LBAs): 1048576 (4GiB) 00:10:04.324 Thin Provisioning: Not Supported 00:10:04.324 Per-NS Atomic Units: No 00:10:04.324 Maximum Single Source Range Length: 128 00:10:04.324 Maximum Copy Length: 128 00:10:04.324 Maximum Source Range Count: 128 00:10:04.324 NGUID/EUI64 Never Reused: No 00:10:04.324 Namespace Write Protected: No 00:10:04.324 Number of LBA Formats: 8 00:10:04.324 Current LBA Format: LBA Format #04 00:10:04.324 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.324 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.324 00:10:04.324 NVM Specific Namespace Data 00:10:04.324 =========================== 00:10:04.324 Logical Block Storage Tag Mask: 0 00:10:04.324 Protection Information Capabilities: 00:10:04.324 16b Guard Protection Information Storage Tag Support: No 00:10:04.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:10:04.324 Storage Tag Check Read Support: No 00:10:04.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Namespace ID:3 00:10:04.324 Error Recovery Timeout: Unlimited 00:10:04.324 Command Set Identifier: NVM (00h) 00:10:04.324 Deallocate: Supported 00:10:04.324 Deallocated/Unwritten Error: Supported 00:10:04.324 Deallocated Read Value: All 0x00 00:10:04.324 Deallocate in Write Zeroes: Not Supported 00:10:04.324 Deallocated Guard Field: 0xFFFF 00:10:04.324 Flush: Supported 00:10:04.324 Reservation: Not Supported 00:10:04.324 Namespace Sharing Capabilities: Private 00:10:04.324 Size (in LBAs): 1048576 (4GiB) 00:10:04.324 Capacity (in LBAs): 1048576 (4GiB) 00:10:04.324 Utilization (in LBAs): 1048576 (4GiB) 00:10:04.324 Thin Provisioning: Not Supported 00:10:04.324 Per-NS Atomic Units: No 00:10:04.324 Maximum Single Source Range Length: 128 00:10:04.324 Maximum Copy Length: 128 00:10:04.324 Maximum Source Range Count: 128 00:10:04.324 NGUID/EUI64 Never Reused: No 00:10:04.324 Namespace Write Protected: No 00:10:04.324 Number of LBA Formats: 8 00:10:04.324 Current LBA Format: LBA Format #04 00:10:04.324 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.324 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.324 00:10:04.324 NVM Specific Namespace Data 00:10:04.324 =========================== 00:10:04.324 Logical Block Storage Tag Mask: 0 00:10:04.324 Protection Information Capabilities: 00:10:04.324 16b Guard Protection Information Storage Tag Support: No 00:10:04.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.324 Storage Tag Check Read Support: No 00:10:04.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.324 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:04.324 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:04.584 ===================================================== 00:10:04.584 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:04.584 ===================================================== 00:10:04.584 Controller Capabilities/Features 00:10:04.584 ================================ 00:10:04.584 Vendor ID: 1b36 00:10:04.584 Subsystem Vendor ID: 1af4 00:10:04.584 Serial Number: 12340 00:10:04.584 Model Number: QEMU NVMe Ctrl 00:10:04.584 Firmware Version: 8.0.0 00:10:04.584 Recommended Arb Burst: 6 00:10:04.584 IEEE OUI Identifier: 00 54 52 00:10:04.584 Multi-path I/O 00:10:04.584 May have multiple subsystem ports: No 00:10:04.584 May have multiple controllers: No 00:10:04.584 Associated with SR-IOV VF: No 00:10:04.584 Max Data Transfer Size: 524288 00:10:04.584 Max Number of Namespaces: 256 00:10:04.584 Max Number of I/O Queues: 64 00:10:04.584 NVMe Specification Version (VS): 1.4 00:10:04.584 NVMe Specification Version (Identify): 1.4 00:10:04.584 Maximum Queue Entries: 2048 00:10:04.584 Contiguous Queues Required: Yes 00:10:04.584 Arbitration Mechanisms Supported 00:10:04.584 Weighted Round Robin: Not Supported 00:10:04.584 Vendor Specific: Not Supported 00:10:04.584 Reset Timeout: 7500 ms 00:10:04.584 Doorbell Stride: 4 bytes 00:10:04.584 NVM Subsystem Reset: Not Supported 00:10:04.584 Command Sets Supported 00:10:04.584 NVM Command Set: Supported 00:10:04.584 Boot Partition: Not Supported 00:10:04.584 Memory Page Size Minimum: 4096 bytes 00:10:04.584 Memory Page Size Maximum: 65536 bytes 00:10:04.584 Persistent Memory Region: Not Supported 00:10:04.584 Optional Asynchronous Events Supported 00:10:04.584 Namespace Attribute Notices: Supported 00:10:04.584 Firmware Activation Notices: Not Supported 00:10:04.584 ANA Change Notices: Not Supported 00:10:04.584 PLE Aggregate Log Change Notices: Not Supported 00:10:04.584 LBA Status Info Alert Notices: Not Supported 00:10:04.584 EGE Aggregate Log Change Notices: Not Supported 00:10:04.584 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.584 Zone Descriptor Change Notices: Not Supported 00:10:04.584 Discovery Log Change Notices: Not Supported 00:10:04.584 Controller Attributes 00:10:04.584 128-bit Host Identifier: Not Supported 00:10:04.584 Non-Operational Permissive Mode: Not Supported 00:10:04.584 NVM Sets: Not Supported 00:10:04.584 Read Recovery Levels: Not Supported 00:10:04.584 Endurance Groups: Not Supported 00:10:04.584 Predictable Latency Mode: Not Supported 00:10:04.584 Traffic Based Keep ALive: Not Supported 00:10:04.584 Namespace Granularity: Not Supported 00:10:04.584 SQ Associations: Not Supported 00:10:04.584 UUID List: Not Supported 00:10:04.584 Multi-Domain Subsystem: Not Supported 00:10:04.584 Fixed Capacity Management: Not Supported 00:10:04.584 Variable Capacity Management: Not Supported 00:10:04.584 Delete Endurance Group: Not Supported 00:10:04.584 Delete NVM Set: Not Supported 00:10:04.584 Extended LBA Formats Supported: Supported 00:10:04.584 Flexible Data Placement Supported: Not Supported 00:10:04.584 00:10:04.584 Controller Memory Buffer Support 00:10:04.584 ================================ 00:10:04.584 Supported: No 00:10:04.584 00:10:04.584 Persistent Memory Region Support 00:10:04.584 
================================ 00:10:04.584 Supported: No 00:10:04.584 00:10:04.584 Admin Command Set Attributes 00:10:04.584 ============================ 00:10:04.584 Security Send/Receive: Not Supported 00:10:04.584 Format NVM: Supported 00:10:04.584 Firmware Activate/Download: Not Supported 00:10:04.584 Namespace Management: Supported 00:10:04.584 Device Self-Test: Not Supported 00:10:04.584 Directives: Supported 00:10:04.584 NVMe-MI: Not Supported 00:10:04.584 Virtualization Management: Not Supported 00:10:04.584 Doorbell Buffer Config: Supported 00:10:04.584 Get LBA Status Capability: Not Supported 00:10:04.584 Command & Feature Lockdown Capability: Not Supported 00:10:04.584 Abort Command Limit: 4 00:10:04.584 Async Event Request Limit: 4 00:10:04.584 Number of Firmware Slots: N/A 00:10:04.584 Firmware Slot 1 Read-Only: N/A 00:10:04.585 Firmware Activation Without Reset: N/A 00:10:04.585 Multiple Update Detection Support: N/A 00:10:04.585 Firmware Update Granularity: No Information Provided 00:10:04.585 Per-Namespace SMART Log: Yes 00:10:04.585 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.585 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:04.585 Command Effects Log Page: Supported 00:10:04.585 Get Log Page Extended Data: Supported 00:10:04.585 Telemetry Log Pages: Not Supported 00:10:04.585 Persistent Event Log Pages: Not Supported 00:10:04.585 Supported Log Pages Log Page: May Support 00:10:04.585 Commands Supported & Effects Log Page: Not Supported 00:10:04.585 Feature Identifiers & Effects Log Page:May Support 00:10:04.585 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.585 Data Area 4 for Telemetry Log: Not Supported 00:10:04.585 Error Log Page Entries Supported: 1 00:10:04.585 Keep Alive: Not Supported 00:10:04.585 00:10:04.585 NVM Command Set Attributes 00:10:04.585 ========================== 00:10:04.585 Submission Queue Entry Size 00:10:04.585 Max: 64 00:10:04.585 Min: 64 00:10:04.585 Completion Queue Entry Size 00:10:04.585 Max: 16 00:10:04.585 Min: 16 00:10:04.585 Number of Namespaces: 256 00:10:04.585 Compare Command: Supported 00:10:04.585 Write Uncorrectable Command: Not Supported 00:10:04.585 Dataset Management Command: Supported 00:10:04.585 Write Zeroes Command: Supported 00:10:04.585 Set Features Save Field: Supported 00:10:04.585 Reservations: Not Supported 00:10:04.585 Timestamp: Supported 00:10:04.585 Copy: Supported 00:10:04.585 Volatile Write Cache: Present 00:10:04.585 Atomic Write Unit (Normal): 1 00:10:04.585 Atomic Write Unit (PFail): 1 00:10:04.585 Atomic Compare & Write Unit: 1 00:10:04.585 Fused Compare & Write: Not Supported 00:10:04.585 Scatter-Gather List 00:10:04.585 SGL Command Set: Supported 00:10:04.585 SGL Keyed: Not Supported 00:10:04.585 SGL Bit Bucket Descriptor: Not Supported 00:10:04.585 SGL Metadata Pointer: Not Supported 00:10:04.585 Oversized SGL: Not Supported 00:10:04.585 SGL Metadata Address: Not Supported 00:10:04.585 SGL Offset: Not Supported 00:10:04.585 Transport SGL Data Block: Not Supported 00:10:04.585 Replay Protected Memory Block: Not Supported 00:10:04.585 00:10:04.585 Firmware Slot Information 00:10:04.585 ========================= 00:10:04.585 Active slot: 1 00:10:04.585 Slot 1 Firmware Revision: 1.0 00:10:04.585 00:10:04.585 00:10:04.585 Commands Supported and Effects 00:10:04.585 ============================== 00:10:04.585 Admin Commands 00:10:04.585 -------------- 00:10:04.585 Delete I/O Submission Queue (00h): Supported 00:10:04.585 Create I/O Submission Queue (01h): Supported 00:10:04.585 
Get Log Page (02h): Supported 00:10:04.585 Delete I/O Completion Queue (04h): Supported 00:10:04.585 Create I/O Completion Queue (05h): Supported 00:10:04.585 Identify (06h): Supported 00:10:04.585 Abort (08h): Supported 00:10:04.585 Set Features (09h): Supported 00:10:04.585 Get Features (0Ah): Supported 00:10:04.585 Asynchronous Event Request (0Ch): Supported 00:10:04.585 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.585 Directive Send (19h): Supported 00:10:04.585 Directive Receive (1Ah): Supported 00:10:04.585 Virtualization Management (1Ch): Supported 00:10:04.585 Doorbell Buffer Config (7Ch): Supported 00:10:04.585 Format NVM (80h): Supported LBA-Change 00:10:04.585 I/O Commands 00:10:04.585 ------------ 00:10:04.585 Flush (00h): Supported LBA-Change 00:10:04.585 Write (01h): Supported LBA-Change 00:10:04.585 Read (02h): Supported 00:10:04.585 Compare (05h): Supported 00:10:04.585 Write Zeroes (08h): Supported LBA-Change 00:10:04.585 Dataset Management (09h): Supported LBA-Change 00:10:04.585 Unknown (0Ch): Supported 00:10:04.585 Unknown (12h): Supported 00:10:04.585 Copy (19h): Supported LBA-Change 00:10:04.585 Unknown (1Dh): Supported LBA-Change 00:10:04.585 00:10:04.585 Error Log 00:10:04.585 ========= 00:10:04.585 00:10:04.585 Arbitration 00:10:04.585 =========== 00:10:04.585 Arbitration Burst: no limit 00:10:04.585 00:10:04.585 Power Management 00:10:04.585 ================ 00:10:04.585 Number of Power States: 1 00:10:04.585 Current Power State: Power State #0 00:10:04.585 Power State #0: 00:10:04.585 Max Power: 25.00 W 00:10:04.585 Non-Operational State: Operational 00:10:04.585 Entry Latency: 16 microseconds 00:10:04.585 Exit Latency: 4 microseconds 00:10:04.585 Relative Read Throughput: 0 00:10:04.585 Relative Read Latency: 0 00:10:04.585 Relative Write Throughput: 0 00:10:04.585 Relative Write Latency: 0 00:10:04.585 Idle Power: Not Reported 00:10:04.585 Active Power: Not Reported 00:10:04.585 Non-Operational Permissive Mode: Not Supported 00:10:04.585 00:10:04.585 Health Information 00:10:04.585 ================== 00:10:04.585 Critical Warnings: 00:10:04.585 Available Spare Space: OK 00:10:04.585 Temperature: OK 00:10:04.585 Device Reliability: OK 00:10:04.585 Read Only: No 00:10:04.585 Volatile Memory Backup: OK 00:10:04.585 Current Temperature: 323 Kelvin (50 Celsius) 00:10:04.585 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:04.585 Available Spare: 0% 00:10:04.585 Available Spare Threshold: 0% 00:10:04.585 Life Percentage Used: 0% 00:10:04.585 Data Units Read: 784 00:10:04.585 Data Units Written: 712 00:10:04.585 Host Read Commands: 37886 00:10:04.585 Host Write Commands: 37672 00:10:04.585 Controller Busy Time: 0 minutes 00:10:04.585 Power Cycles: 0 00:10:04.585 Power On Hours: 0 hours 00:10:04.585 Unsafe Shutdowns: 0 00:10:04.585 Unrecoverable Media Errors: 0 00:10:04.585 Lifetime Error Log Entries: 0 00:10:04.585 Warning Temperature Time: 0 minutes 00:10:04.585 Critical Temperature Time: 0 minutes 00:10:04.585 00:10:04.585 Number of Queues 00:10:04.586 ================ 00:10:04.586 Number of I/O Submission Queues: 64 00:10:04.586 Number of I/O Completion Queues: 64 00:10:04.586 00:10:04.586 ZNS Specific Controller Data 00:10:04.586 ============================ 00:10:04.586 Zone Append Size Limit: 0 00:10:04.586 00:10:04.586 00:10:04.586 Active Namespaces 00:10:04.586 ================= 00:10:04.586 Namespace ID:1 00:10:04.586 Error Recovery Timeout: Unlimited 00:10:04.586 Command Set Identifier: NVM (00h) 00:10:04.586 Deallocate: Supported 
00:10:04.586 Deallocated/Unwritten Error: Supported 00:10:04.586 Deallocated Read Value: All 0x00 00:10:04.586 Deallocate in Write Zeroes: Not Supported 00:10:04.586 Deallocated Guard Field: 0xFFFF 00:10:04.586 Flush: Supported 00:10:04.586 Reservation: Not Supported 00:10:04.586 Metadata Transferred as: Separate Metadata Buffer 00:10:04.586 Namespace Sharing Capabilities: Private 00:10:04.586 Size (in LBAs): 1548666 (5GiB) 00:10:04.586 Capacity (in LBAs): 1548666 (5GiB) 00:10:04.586 Utilization (in LBAs): 1548666 (5GiB) 00:10:04.586 Thin Provisioning: Not Supported 00:10:04.586 Per-NS Atomic Units: No 00:10:04.586 Maximum Single Source Range Length: 128 00:10:04.586 Maximum Copy Length: 128 00:10:04.586 Maximum Source Range Count: 128 00:10:04.586 NGUID/EUI64 Never Reused: No 00:10:04.586 Namespace Write Protected: No 00:10:04.586 Number of LBA Formats: 8 00:10:04.586 Current LBA Format: LBA Format #07 00:10:04.586 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:04.586 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:04.586 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:04.586 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:04.586 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:04.586 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:04.586 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:04.586 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:04.586 00:10:04.586 NVM Specific Namespace Data 00:10:04.586 =========================== 00:10:04.586 Logical Block Storage Tag Mask: 0 00:10:04.586 Protection Information Capabilities: 00:10:04.586 16b Guard Protection Information Storage Tag Support: No 00:10:04.586 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:04.586 Storage Tag Check Read Support: No 00:10:04.586 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:04.586 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:04.586 17:59:33 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:04.846 ===================================================== 00:10:04.846 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:04.846 ===================================================== 00:10:04.846 Controller Capabilities/Features 00:10:04.846 ================================ 00:10:04.846 Vendor ID: 1b36 00:10:04.846 Subsystem Vendor ID: 1af4 00:10:04.846 Serial Number: 12341 00:10:04.846 Model Number: QEMU NVMe Ctrl 00:10:04.846 Firmware Version: 8.0.0 00:10:04.846 Recommended Arb Burst: 6 00:10:04.846 IEEE OUI Identifier: 00 54 52 00:10:04.846 Multi-path I/O 00:10:04.846 May have multiple subsystem ports: No 00:10:04.846 May have multiple 
controllers: No 00:10:04.846 Associated with SR-IOV VF: No 00:10:04.846 Max Data Transfer Size: 524288 00:10:04.846 Max Number of Namespaces: 256 00:10:04.846 Max Number of I/O Queues: 64 00:10:04.846 NVMe Specification Version (VS): 1.4 00:10:04.846 NVMe Specification Version (Identify): 1.4 00:10:04.846 Maximum Queue Entries: 2048 00:10:04.846 Contiguous Queues Required: Yes 00:10:04.846 Arbitration Mechanisms Supported 00:10:04.846 Weighted Round Robin: Not Supported 00:10:04.846 Vendor Specific: Not Supported 00:10:04.846 Reset Timeout: 7500 ms 00:10:04.846 Doorbell Stride: 4 bytes 00:10:04.846 NVM Subsystem Reset: Not Supported 00:10:04.846 Command Sets Supported 00:10:04.846 NVM Command Set: Supported 00:10:04.846 Boot Partition: Not Supported 00:10:04.846 Memory Page Size Minimum: 4096 bytes 00:10:04.846 Memory Page Size Maximum: 65536 bytes 00:10:04.846 Persistent Memory Region: Not Supported 00:10:04.846 Optional Asynchronous Events Supported 00:10:04.846 Namespace Attribute Notices: Supported 00:10:04.846 Firmware Activation Notices: Not Supported 00:10:04.846 ANA Change Notices: Not Supported 00:10:04.846 PLE Aggregate Log Change Notices: Not Supported 00:10:04.846 LBA Status Info Alert Notices: Not Supported 00:10:04.846 EGE Aggregate Log Change Notices: Not Supported 00:10:04.846 Normal NVM Subsystem Shutdown event: Not Supported 00:10:04.846 Zone Descriptor Change Notices: Not Supported 00:10:04.846 Discovery Log Change Notices: Not Supported 00:10:04.846 Controller Attributes 00:10:04.846 128-bit Host Identifier: Not Supported 00:10:04.846 Non-Operational Permissive Mode: Not Supported 00:10:04.846 NVM Sets: Not Supported 00:10:04.846 Read Recovery Levels: Not Supported 00:10:04.846 Endurance Groups: Not Supported 00:10:04.846 Predictable Latency Mode: Not Supported 00:10:04.846 Traffic Based Keep ALive: Not Supported 00:10:04.846 Namespace Granularity: Not Supported 00:10:04.846 SQ Associations: Not Supported 00:10:04.846 UUID List: Not Supported 00:10:04.846 Multi-Domain Subsystem: Not Supported 00:10:04.846 Fixed Capacity Management: Not Supported 00:10:04.846 Variable Capacity Management: Not Supported 00:10:04.846 Delete Endurance Group: Not Supported 00:10:04.846 Delete NVM Set: Not Supported 00:10:04.846 Extended LBA Formats Supported: Supported 00:10:04.846 Flexible Data Placement Supported: Not Supported 00:10:04.846 00:10:04.846 Controller Memory Buffer Support 00:10:04.846 ================================ 00:10:04.846 Supported: No 00:10:04.846 00:10:04.847 Persistent Memory Region Support 00:10:04.847 ================================ 00:10:04.847 Supported: No 00:10:04.847 00:10:04.847 Admin Command Set Attributes 00:10:04.847 ============================ 00:10:04.847 Security Send/Receive: Not Supported 00:10:04.847 Format NVM: Supported 00:10:04.847 Firmware Activate/Download: Not Supported 00:10:04.847 Namespace Management: Supported 00:10:04.847 Device Self-Test: Not Supported 00:10:04.847 Directives: Supported 00:10:04.847 NVMe-MI: Not Supported 00:10:04.847 Virtualization Management: Not Supported 00:10:04.847 Doorbell Buffer Config: Supported 00:10:04.847 Get LBA Status Capability: Not Supported 00:10:04.847 Command & Feature Lockdown Capability: Not Supported 00:10:04.847 Abort Command Limit: 4 00:10:04.847 Async Event Request Limit: 4 00:10:04.847 Number of Firmware Slots: N/A 00:10:04.847 Firmware Slot 1 Read-Only: N/A 00:10:04.847 Firmware Activation Without Reset: N/A 00:10:04.847 Multiple Update Detection Support: N/A 00:10:04.847 Firmware Update 
Granularity: No Information Provided 00:10:04.847 Per-Namespace SMART Log: Yes 00:10:04.847 Asymmetric Namespace Access Log Page: Not Supported 00:10:04.847 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:04.847 Command Effects Log Page: Supported 00:10:04.847 Get Log Page Extended Data: Supported 00:10:04.847 Telemetry Log Pages: Not Supported 00:10:04.847 Persistent Event Log Pages: Not Supported 00:10:04.847 Supported Log Pages Log Page: May Support 00:10:04.847 Commands Supported & Effects Log Page: Not Supported 00:10:04.847 Feature Identifiers & Effects Log Page:May Support 00:10:04.847 NVMe-MI Commands & Effects Log Page: May Support 00:10:04.847 Data Area 4 for Telemetry Log: Not Supported 00:10:04.847 Error Log Page Entries Supported: 1 00:10:04.847 Keep Alive: Not Supported 00:10:04.847 00:10:04.847 NVM Command Set Attributes 00:10:04.847 ========================== 00:10:04.847 Submission Queue Entry Size 00:10:04.847 Max: 64 00:10:04.847 Min: 64 00:10:04.847 Completion Queue Entry Size 00:10:04.847 Max: 16 00:10:04.847 Min: 16 00:10:04.847 Number of Namespaces: 256 00:10:04.847 Compare Command: Supported 00:10:04.847 Write Uncorrectable Command: Not Supported 00:10:04.847 Dataset Management Command: Supported 00:10:04.847 Write Zeroes Command: Supported 00:10:04.847 Set Features Save Field: Supported 00:10:04.847 Reservations: Not Supported 00:10:04.847 Timestamp: Supported 00:10:04.847 Copy: Supported 00:10:04.847 Volatile Write Cache: Present 00:10:04.847 Atomic Write Unit (Normal): 1 00:10:04.847 Atomic Write Unit (PFail): 1 00:10:04.847 Atomic Compare & Write Unit: 1 00:10:04.847 Fused Compare & Write: Not Supported 00:10:04.847 Scatter-Gather List 00:10:04.847 SGL Command Set: Supported 00:10:04.847 SGL Keyed: Not Supported 00:10:04.847 SGL Bit Bucket Descriptor: Not Supported 00:10:04.847 SGL Metadata Pointer: Not Supported 00:10:04.847 Oversized SGL: Not Supported 00:10:04.847 SGL Metadata Address: Not Supported 00:10:04.847 SGL Offset: Not Supported 00:10:04.847 Transport SGL Data Block: Not Supported 00:10:04.847 Replay Protected Memory Block: Not Supported 00:10:04.847 00:10:04.847 Firmware Slot Information 00:10:04.847 ========================= 00:10:04.847 Active slot: 1 00:10:04.847 Slot 1 Firmware Revision: 1.0 00:10:04.847 00:10:04.847 00:10:04.847 Commands Supported and Effects 00:10:04.847 ============================== 00:10:04.847 Admin Commands 00:10:04.847 -------------- 00:10:04.847 Delete I/O Submission Queue (00h): Supported 00:10:04.847 Create I/O Submission Queue (01h): Supported 00:10:04.847 Get Log Page (02h): Supported 00:10:04.847 Delete I/O Completion Queue (04h): Supported 00:10:04.847 Create I/O Completion Queue (05h): Supported 00:10:04.847 Identify (06h): Supported 00:10:04.847 Abort (08h): Supported 00:10:04.847 Set Features (09h): Supported 00:10:04.847 Get Features (0Ah): Supported 00:10:04.847 Asynchronous Event Request (0Ch): Supported 00:10:04.847 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:04.847 Directive Send (19h): Supported 00:10:04.847 Directive Receive (1Ah): Supported 00:10:04.847 Virtualization Management (1Ch): Supported 00:10:04.847 Doorbell Buffer Config (7Ch): Supported 00:10:04.847 Format NVM (80h): Supported LBA-Change 00:10:04.847 I/O Commands 00:10:04.847 ------------ 00:10:04.847 Flush (00h): Supported LBA-Change 00:10:04.847 Write (01h): Supported LBA-Change 00:10:04.847 Read (02h): Supported 00:10:04.847 Compare (05h): Supported 00:10:04.847 Write Zeroes (08h): Supported LBA-Change 00:10:04.847 
Dataset Management (09h): Supported LBA-Change 00:10:04.847 Unknown (0Ch): Supported 00:10:04.847 Unknown (12h): Supported 00:10:04.847 Copy (19h): Supported LBA-Change 00:10:04.847 Unknown (1Dh): Supported LBA-Change 00:10:04.847 00:10:04.847 Error Log 00:10:04.847 ========= 00:10:04.847 00:10:04.847 Arbitration 00:10:04.847 =========== 00:10:04.847 Arbitration Burst: no limit 00:10:04.847 00:10:04.847 Power Management 00:10:04.847 ================ 00:10:04.847 Number of Power States: 1 00:10:04.847 Current Power State: Power State #0 00:10:04.847 Power State #0: 00:10:04.847 Max Power: 25.00 W 00:10:04.847 Non-Operational State: Operational 00:10:04.847 Entry Latency: 16 microseconds 00:10:04.847 Exit Latency: 4 microseconds 00:10:04.847 Relative Read Throughput: 0 00:10:04.847 Relative Read Latency: 0 00:10:04.847 Relative Write Throughput: 0 00:10:04.847 Relative Write Latency: 0 00:10:05.107 Idle Power: Not Reported 00:10:05.107 Active Power: Not Reported 00:10:05.107 Non-Operational Permissive Mode: Not Supported 00:10:05.107 00:10:05.107 Health Information 00:10:05.107 ================== 00:10:05.107 Critical Warnings: 00:10:05.107 Available Spare Space: OK 00:10:05.107 Temperature: OK 00:10:05.107 Device Reliability: OK 00:10:05.107 Read Only: No 00:10:05.107 Volatile Memory Backup: OK 00:10:05.107 Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.107 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:05.107 Available Spare: 0% 00:10:05.107 Available Spare Threshold: 0% 00:10:05.107 Life Percentage Used: 0% 00:10:05.107 Data Units Read: 1187 00:10:05.107 Data Units Written: 1054 00:10:05.107 Host Read Commands: 56057 00:10:05.107 Host Write Commands: 54862 00:10:05.107 Controller Busy Time: 0 minutes 00:10:05.107 Power Cycles: 0 00:10:05.107 Power On Hours: 0 hours 00:10:05.107 Unsafe Shutdowns: 0 00:10:05.107 Unrecoverable Media Errors: 0 00:10:05.107 Lifetime Error Log Entries: 0 00:10:05.107 Warning Temperature Time: 0 minutes 00:10:05.107 Critical Temperature Time: 0 minutes 00:10:05.107 00:10:05.107 Number of Queues 00:10:05.107 ================ 00:10:05.107 Number of I/O Submission Queues: 64 00:10:05.107 Number of I/O Completion Queues: 64 00:10:05.107 00:10:05.107 ZNS Specific Controller Data 00:10:05.107 ============================ 00:10:05.107 Zone Append Size Limit: 0 00:10:05.107 00:10:05.107 00:10:05.107 Active Namespaces 00:10:05.107 ================= 00:10:05.107 Namespace ID:1 00:10:05.107 Error Recovery Timeout: Unlimited 00:10:05.107 Command Set Identifier: NVM (00h) 00:10:05.107 Deallocate: Supported 00:10:05.107 Deallocated/Unwritten Error: Supported 00:10:05.107 Deallocated Read Value: All 0x00 00:10:05.107 Deallocate in Write Zeroes: Not Supported 00:10:05.107 Deallocated Guard Field: 0xFFFF 00:10:05.107 Flush: Supported 00:10:05.107 Reservation: Not Supported 00:10:05.107 Namespace Sharing Capabilities: Private 00:10:05.107 Size (in LBAs): 1310720 (5GiB) 00:10:05.107 Capacity (in LBAs): 1310720 (5GiB) 00:10:05.107 Utilization (in LBAs): 1310720 (5GiB) 00:10:05.107 Thin Provisioning: Not Supported 00:10:05.107 Per-NS Atomic Units: No 00:10:05.107 Maximum Single Source Range Length: 128 00:10:05.107 Maximum Copy Length: 128 00:10:05.107 Maximum Source Range Count: 128 00:10:05.107 NGUID/EUI64 Never Reused: No 00:10:05.107 Namespace Write Protected: No 00:10:05.107 Number of LBA Formats: 8 00:10:05.107 Current LBA Format: LBA Format #04 00:10:05.107 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.107 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:10:05.107 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.107 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:05.107 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.107 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.107 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.107 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.107 00:10:05.107 NVM Specific Namespace Data 00:10:05.107 =========================== 00:10:05.107 Logical Block Storage Tag Mask: 0 00:10:05.107 Protection Information Capabilities: 00:10:05.107 16b Guard Protection Information Storage Tag Support: No 00:10:05.107 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.107 Storage Tag Check Read Support: No 00:10:05.107 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.107 17:59:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:05.107 17:59:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:05.367 ===================================================== 00:10:05.367 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:05.367 ===================================================== 00:10:05.367 Controller Capabilities/Features 00:10:05.367 ================================ 00:10:05.367 Vendor ID: 1b36 00:10:05.367 Subsystem Vendor ID: 1af4 00:10:05.367 Serial Number: 12342 00:10:05.367 Model Number: QEMU NVMe Ctrl 00:10:05.367 Firmware Version: 8.0.0 00:10:05.367 Recommended Arb Burst: 6 00:10:05.367 IEEE OUI Identifier: 00 54 52 00:10:05.367 Multi-path I/O 00:10:05.367 May have multiple subsystem ports: No 00:10:05.367 May have multiple controllers: No 00:10:05.367 Associated with SR-IOV VF: No 00:10:05.367 Max Data Transfer Size: 524288 00:10:05.367 Max Number of Namespaces: 256 00:10:05.367 Max Number of I/O Queues: 64 00:10:05.367 NVMe Specification Version (VS): 1.4 00:10:05.367 NVMe Specification Version (Identify): 1.4 00:10:05.367 Maximum Queue Entries: 2048 00:10:05.367 Contiguous Queues Required: Yes 00:10:05.367 Arbitration Mechanisms Supported 00:10:05.367 Weighted Round Robin: Not Supported 00:10:05.367 Vendor Specific: Not Supported 00:10:05.367 Reset Timeout: 7500 ms 00:10:05.367 Doorbell Stride: 4 bytes 00:10:05.367 NVM Subsystem Reset: Not Supported 00:10:05.367 Command Sets Supported 00:10:05.367 NVM Command Set: Supported 00:10:05.367 Boot Partition: Not Supported 00:10:05.367 Memory Page Size Minimum: 4096 bytes 00:10:05.367 Memory Page Size Maximum: 65536 bytes 00:10:05.367 Persistent Memory Region: Not Supported 00:10:05.367 Optional Asynchronous Events Supported 00:10:05.367 Namespace Attribute Notices: Supported 00:10:05.367 
Firmware Activation Notices: Not Supported 00:10:05.367 ANA Change Notices: Not Supported 00:10:05.367 PLE Aggregate Log Change Notices: Not Supported 00:10:05.367 LBA Status Info Alert Notices: Not Supported 00:10:05.367 EGE Aggregate Log Change Notices: Not Supported 00:10:05.367 Normal NVM Subsystem Shutdown event: Not Supported 00:10:05.368 Zone Descriptor Change Notices: Not Supported 00:10:05.368 Discovery Log Change Notices: Not Supported 00:10:05.368 Controller Attributes 00:10:05.368 128-bit Host Identifier: Not Supported 00:10:05.368 Non-Operational Permissive Mode: Not Supported 00:10:05.368 NVM Sets: Not Supported 00:10:05.368 Read Recovery Levels: Not Supported 00:10:05.368 Endurance Groups: Not Supported 00:10:05.368 Predictable Latency Mode: Not Supported 00:10:05.368 Traffic Based Keep ALive: Not Supported 00:10:05.368 Namespace Granularity: Not Supported 00:10:05.368 SQ Associations: Not Supported 00:10:05.368 UUID List: Not Supported 00:10:05.368 Multi-Domain Subsystem: Not Supported 00:10:05.368 Fixed Capacity Management: Not Supported 00:10:05.368 Variable Capacity Management: Not Supported 00:10:05.368 Delete Endurance Group: Not Supported 00:10:05.368 Delete NVM Set: Not Supported 00:10:05.368 Extended LBA Formats Supported: Supported 00:10:05.368 Flexible Data Placement Supported: Not Supported 00:10:05.368 00:10:05.368 Controller Memory Buffer Support 00:10:05.368 ================================ 00:10:05.368 Supported: No 00:10:05.368 00:10:05.368 Persistent Memory Region Support 00:10:05.368 ================================ 00:10:05.368 Supported: No 00:10:05.368 00:10:05.368 Admin Command Set Attributes 00:10:05.368 ============================ 00:10:05.368 Security Send/Receive: Not Supported 00:10:05.368 Format NVM: Supported 00:10:05.368 Firmware Activate/Download: Not Supported 00:10:05.368 Namespace Management: Supported 00:10:05.368 Device Self-Test: Not Supported 00:10:05.368 Directives: Supported 00:10:05.368 NVMe-MI: Not Supported 00:10:05.368 Virtualization Management: Not Supported 00:10:05.368 Doorbell Buffer Config: Supported 00:10:05.368 Get LBA Status Capability: Not Supported 00:10:05.368 Command & Feature Lockdown Capability: Not Supported 00:10:05.368 Abort Command Limit: 4 00:10:05.368 Async Event Request Limit: 4 00:10:05.368 Number of Firmware Slots: N/A 00:10:05.368 Firmware Slot 1 Read-Only: N/A 00:10:05.368 Firmware Activation Without Reset: N/A 00:10:05.368 Multiple Update Detection Support: N/A 00:10:05.368 Firmware Update Granularity: No Information Provided 00:10:05.368 Per-Namespace SMART Log: Yes 00:10:05.368 Asymmetric Namespace Access Log Page: Not Supported 00:10:05.368 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:05.368 Command Effects Log Page: Supported 00:10:05.368 Get Log Page Extended Data: Supported 00:10:05.368 Telemetry Log Pages: Not Supported 00:10:05.368 Persistent Event Log Pages: Not Supported 00:10:05.368 Supported Log Pages Log Page: May Support 00:10:05.368 Commands Supported & Effects Log Page: Not Supported 00:10:05.368 Feature Identifiers & Effects Log Page:May Support 00:10:05.368 NVMe-MI Commands & Effects Log Page: May Support 00:10:05.368 Data Area 4 for Telemetry Log: Not Supported 00:10:05.368 Error Log Page Entries Supported: 1 00:10:05.368 Keep Alive: Not Supported 00:10:05.368 00:10:05.368 NVM Command Set Attributes 00:10:05.368 ========================== 00:10:05.368 Submission Queue Entry Size 00:10:05.368 Max: 64 00:10:05.368 Min: 64 00:10:05.368 Completion Queue Entry Size 00:10:05.368 Max: 16 
00:10:05.368 Min: 16 00:10:05.368 Number of Namespaces: 256 00:10:05.368 Compare Command: Supported 00:10:05.368 Write Uncorrectable Command: Not Supported 00:10:05.368 Dataset Management Command: Supported 00:10:05.368 Write Zeroes Command: Supported 00:10:05.368 Set Features Save Field: Supported 00:10:05.368 Reservations: Not Supported 00:10:05.368 Timestamp: Supported 00:10:05.368 Copy: Supported 00:10:05.368 Volatile Write Cache: Present 00:10:05.368 Atomic Write Unit (Normal): 1 00:10:05.368 Atomic Write Unit (PFail): 1 00:10:05.368 Atomic Compare & Write Unit: 1 00:10:05.368 Fused Compare & Write: Not Supported 00:10:05.368 Scatter-Gather List 00:10:05.368 SGL Command Set: Supported 00:10:05.368 SGL Keyed: Not Supported 00:10:05.368 SGL Bit Bucket Descriptor: Not Supported 00:10:05.368 SGL Metadata Pointer: Not Supported 00:10:05.368 Oversized SGL: Not Supported 00:10:05.368 SGL Metadata Address: Not Supported 00:10:05.368 SGL Offset: Not Supported 00:10:05.368 Transport SGL Data Block: Not Supported 00:10:05.368 Replay Protected Memory Block: Not Supported 00:10:05.368 00:10:05.368 Firmware Slot Information 00:10:05.368 ========================= 00:10:05.368 Active slot: 1 00:10:05.368 Slot 1 Firmware Revision: 1.0 00:10:05.368 00:10:05.368 00:10:05.368 Commands Supported and Effects 00:10:05.368 ============================== 00:10:05.368 Admin Commands 00:10:05.368 -------------- 00:10:05.368 Delete I/O Submission Queue (00h): Supported 00:10:05.368 Create I/O Submission Queue (01h): Supported 00:10:05.368 Get Log Page (02h): Supported 00:10:05.368 Delete I/O Completion Queue (04h): Supported 00:10:05.368 Create I/O Completion Queue (05h): Supported 00:10:05.368 Identify (06h): Supported 00:10:05.368 Abort (08h): Supported 00:10:05.368 Set Features (09h): Supported 00:10:05.368 Get Features (0Ah): Supported 00:10:05.368 Asynchronous Event Request (0Ch): Supported 00:10:05.368 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:05.368 Directive Send (19h): Supported 00:10:05.368 Directive Receive (1Ah): Supported 00:10:05.368 Virtualization Management (1Ch): Supported 00:10:05.368 Doorbell Buffer Config (7Ch): Supported 00:10:05.368 Format NVM (80h): Supported LBA-Change 00:10:05.368 I/O Commands 00:10:05.368 ------------ 00:10:05.368 Flush (00h): Supported LBA-Change 00:10:05.368 Write (01h): Supported LBA-Change 00:10:05.368 Read (02h): Supported 00:10:05.368 Compare (05h): Supported 00:10:05.368 Write Zeroes (08h): Supported LBA-Change 00:10:05.368 Dataset Management (09h): Supported LBA-Change 00:10:05.368 Unknown (0Ch): Supported 00:10:05.368 Unknown (12h): Supported 00:10:05.368 Copy (19h): Supported LBA-Change 00:10:05.368 Unknown (1Dh): Supported LBA-Change 00:10:05.368 00:10:05.368 Error Log 00:10:05.368 ========= 00:10:05.368 00:10:05.368 Arbitration 00:10:05.368 =========== 00:10:05.368 Arbitration Burst: no limit 00:10:05.368 00:10:05.368 Power Management 00:10:05.368 ================ 00:10:05.368 Number of Power States: 1 00:10:05.368 Current Power State: Power State #0 00:10:05.368 Power State #0: 00:10:05.368 Max Power: 25.00 W 00:10:05.368 Non-Operational State: Operational 00:10:05.368 Entry Latency: 16 microseconds 00:10:05.368 Exit Latency: 4 microseconds 00:10:05.368 Relative Read Throughput: 0 00:10:05.368 Relative Read Latency: 0 00:10:05.368 Relative Write Throughput: 0 00:10:05.368 Relative Write Latency: 0 00:10:05.368 Idle Power: Not Reported 00:10:05.368 Active Power: Not Reported 00:10:05.368 Non-Operational Permissive Mode: Not Supported 
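The Health Information block that follows reports temperatures in Kelvin with the Celsius equivalent in parentheses (323 Kelvin is 50 Celsius, 343 Kelvin is 70 Celsius). A minimal bash sketch of pulling that field back out of an identify dump, assuming the exact "Current Temperature:" label captured in this log; the binary path and traddr are the ones already used by the invocations above:

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
# Extract the Kelvin reading from the "Current Temperature:" line of the dump.
kelvin=$("$identify" -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 |
    awk '/Current Temperature:/ {print $3; exit}')
# NVMe reports temperature in Kelvin; subtracting 273 gives the Celsius value
# the tool prints alongside it (323 -> 50, 343 -> 70).
echo "Current temperature: ${kelvin} K ($((kelvin - 273)) C)"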
00:10:05.368 00:10:05.368 Health Information 00:10:05.368 ================== 00:10:05.368 Critical Warnings: 00:10:05.368 Available Spare Space: OK 00:10:05.368 Temperature: OK 00:10:05.368 Device Reliability: OK 00:10:05.368 Read Only: No 00:10:05.368 Volatile Memory Backup: OK 00:10:05.368 Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.368 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:05.368 Available Spare: 0% 00:10:05.368 Available Spare Threshold: 0% 00:10:05.368 Life Percentage Used: 0% 00:10:05.368 Data Units Read: 2487 00:10:05.368 Data Units Written: 2274 00:10:05.368 Host Read Commands: 115850 00:10:05.368 Host Write Commands: 114119 00:10:05.368 Controller Busy Time: 0 minutes 00:10:05.368 Power Cycles: 0 00:10:05.368 Power On Hours: 0 hours 00:10:05.368 Unsafe Shutdowns: 0 00:10:05.368 Unrecoverable Media Errors: 0 00:10:05.368 Lifetime Error Log Entries: 0 00:10:05.368 Warning Temperature Time: 0 minutes 00:10:05.368 Critical Temperature Time: 0 minutes 00:10:05.368 00:10:05.368 Number of Queues 00:10:05.368 ================ 00:10:05.368 Number of I/O Submission Queues: 64 00:10:05.368 Number of I/O Completion Queues: 64 00:10:05.368 00:10:05.368 ZNS Specific Controller Data 00:10:05.368 ============================ 00:10:05.368 Zone Append Size Limit: 0 00:10:05.368 00:10:05.368 00:10:05.368 Active Namespaces 00:10:05.368 ================= 00:10:05.368 Namespace ID:1 00:10:05.368 Error Recovery Timeout: Unlimited 00:10:05.368 Command Set Identifier: NVM (00h) 00:10:05.368 Deallocate: Supported 00:10:05.368 Deallocated/Unwritten Error: Supported 00:10:05.368 Deallocated Read Value: All 0x00 00:10:05.368 Deallocate in Write Zeroes: Not Supported 00:10:05.368 Deallocated Guard Field: 0xFFFF 00:10:05.368 Flush: Supported 00:10:05.368 Reservation: Not Supported 00:10:05.368 Namespace Sharing Capabilities: Private 00:10:05.368 Size (in LBAs): 1048576 (4GiB) 00:10:05.368 Capacity (in LBAs): 1048576 (4GiB) 00:10:05.368 Utilization (in LBAs): 1048576 (4GiB) 00:10:05.368 Thin Provisioning: Not Supported 00:10:05.369 Per-NS Atomic Units: No 00:10:05.369 Maximum Single Source Range Length: 128 00:10:05.369 Maximum Copy Length: 128 00:10:05.369 Maximum Source Range Count: 128 00:10:05.369 NGUID/EUI64 Never Reused: No 00:10:05.369 Namespace Write Protected: No 00:10:05.369 Number of LBA Formats: 8 00:10:05.369 Current LBA Format: LBA Format #04 00:10:05.369 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.369 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.369 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.369 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:05.369 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.369 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.369 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.369 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.369 00:10:05.369 NVM Specific Namespace Data 00:10:05.369 =========================== 00:10:05.369 Logical Block Storage Tag Mask: 0 00:10:05.369 Protection Information Capabilities: 00:10:05.369 16b Guard Protection Information Storage Tag Support: No 00:10:05.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.369 Storage Tag Check Read Support: No 00:10:05.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Namespace ID:2 00:10:05.369 Error Recovery Timeout: Unlimited 00:10:05.369 Command Set Identifier: NVM (00h) 00:10:05.369 Deallocate: Supported 00:10:05.369 Deallocated/Unwritten Error: Supported 00:10:05.369 Deallocated Read Value: All 0x00 00:10:05.369 Deallocate in Write Zeroes: Not Supported 00:10:05.369 Deallocated Guard Field: 0xFFFF 00:10:05.369 Flush: Supported 00:10:05.369 Reservation: Not Supported 00:10:05.369 Namespace Sharing Capabilities: Private 00:10:05.369 Size (in LBAs): 1048576 (4GiB) 00:10:05.369 Capacity (in LBAs): 1048576 (4GiB) 00:10:05.369 Utilization (in LBAs): 1048576 (4GiB) 00:10:05.369 Thin Provisioning: Not Supported 00:10:05.369 Per-NS Atomic Units: No 00:10:05.369 Maximum Single Source Range Length: 128 00:10:05.369 Maximum Copy Length: 128 00:10:05.369 Maximum Source Range Count: 128 00:10:05.369 NGUID/EUI64 Never Reused: No 00:10:05.369 Namespace Write Protected: No 00:10:05.369 Number of LBA Formats: 8 00:10:05.369 Current LBA Format: LBA Format #04 00:10:05.369 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.369 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.369 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.369 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:05.369 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.369 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.369 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.369 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.369 00:10:05.369 NVM Specific Namespace Data 00:10:05.369 =========================== 00:10:05.369 Logical Block Storage Tag Mask: 0 00:10:05.369 Protection Information Capabilities: 00:10:05.369 16b Guard Protection Information Storage Tag Support: No 00:10:05.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.369 Storage Tag Check Read Support: No 00:10:05.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Namespace ID:3 00:10:05.369 Error Recovery Timeout: Unlimited 00:10:05.369 Command Set Identifier: NVM (00h) 00:10:05.369 Deallocate: Supported 00:10:05.369 Deallocated/Unwritten Error: Supported 00:10:05.369 Deallocated Read 
Value: All 0x00 00:10:05.369 Deallocate in Write Zeroes: Not Supported 00:10:05.369 Deallocated Guard Field: 0xFFFF 00:10:05.369 Flush: Supported 00:10:05.369 Reservation: Not Supported 00:10:05.369 Namespace Sharing Capabilities: Private 00:10:05.369 Size (in LBAs): 1048576 (4GiB) 00:10:05.369 Capacity (in LBAs): 1048576 (4GiB) 00:10:05.369 Utilization (in LBAs): 1048576 (4GiB) 00:10:05.369 Thin Provisioning: Not Supported 00:10:05.369 Per-NS Atomic Units: No 00:10:05.369 Maximum Single Source Range Length: 128 00:10:05.369 Maximum Copy Length: 128 00:10:05.369 Maximum Source Range Count: 128 00:10:05.369 NGUID/EUI64 Never Reused: No 00:10:05.369 Namespace Write Protected: No 00:10:05.369 Number of LBA Formats: 8 00:10:05.369 Current LBA Format: LBA Format #04 00:10:05.369 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.369 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.369 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.369 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:05.369 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.369 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.369 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.369 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.369 00:10:05.369 NVM Specific Namespace Data 00:10:05.369 =========================== 00:10:05.369 Logical Block Storage Tag Mask: 0 00:10:05.369 Protection Information Capabilities: 00:10:05.369 16b Guard Protection Information Storage Tag Support: No 00:10:05.369 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.369 Storage Tag Check Read Support: No 00:10:05.369 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.369 17:59:34 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:05.369 17:59:34 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:05.629 ===================================================== 00:10:05.629 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:05.629 ===================================================== 00:10:05.629 Controller Capabilities/Features 00:10:05.629 ================================ 00:10:05.629 Vendor ID: 1b36 00:10:05.629 Subsystem Vendor ID: 1af4 00:10:05.629 Serial Number: 12343 00:10:05.629 Model Number: QEMU NVMe Ctrl 00:10:05.629 Firmware Version: 8.0.0 00:10:05.629 Recommended Arb Burst: 6 00:10:05.629 IEEE OUI Identifier: 00 54 52 00:10:05.629 Multi-path I/O 00:10:05.629 May have multiple subsystem ports: No 00:10:05.629 May have multiple controllers: Yes 00:10:05.629 Associated with SR-IOV VF: No 00:10:05.629 Max Data Transfer Size: 524288 00:10:05.629 Max Number of Namespaces: 
256 00:10:05.629 Max Number of I/O Queues: 64 00:10:05.629 NVMe Specification Version (VS): 1.4 00:10:05.629 NVMe Specification Version (Identify): 1.4 00:10:05.629 Maximum Queue Entries: 2048 00:10:05.629 Contiguous Queues Required: Yes 00:10:05.629 Arbitration Mechanisms Supported 00:10:05.629 Weighted Round Robin: Not Supported 00:10:05.629 Vendor Specific: Not Supported 00:10:05.629 Reset Timeout: 7500 ms 00:10:05.629 Doorbell Stride: 4 bytes 00:10:05.629 NVM Subsystem Reset: Not Supported 00:10:05.629 Command Sets Supported 00:10:05.629 NVM Command Set: Supported 00:10:05.629 Boot Partition: Not Supported 00:10:05.629 Memory Page Size Minimum: 4096 bytes 00:10:05.629 Memory Page Size Maximum: 65536 bytes 00:10:05.629 Persistent Memory Region: Not Supported 00:10:05.629 Optional Asynchronous Events Supported 00:10:05.630 Namespace Attribute Notices: Supported 00:10:05.630 Firmware Activation Notices: Not Supported 00:10:05.630 ANA Change Notices: Not Supported 00:10:05.630 PLE Aggregate Log Change Notices: Not Supported 00:10:05.630 LBA Status Info Alert Notices: Not Supported 00:10:05.630 EGE Aggregate Log Change Notices: Not Supported 00:10:05.630 Normal NVM Subsystem Shutdown event: Not Supported 00:10:05.630 Zone Descriptor Change Notices: Not Supported 00:10:05.630 Discovery Log Change Notices: Not Supported 00:10:05.630 Controller Attributes 00:10:05.630 128-bit Host Identifier: Not Supported 00:10:05.630 Non-Operational Permissive Mode: Not Supported 00:10:05.630 NVM Sets: Not Supported 00:10:05.630 Read Recovery Levels: Not Supported 00:10:05.630 Endurance Groups: Supported 00:10:05.630 Predictable Latency Mode: Not Supported 00:10:05.630 Traffic Based Keep Alive: Not Supported 00:10:05.630 Namespace Granularity: Not Supported 00:10:05.630 SQ Associations: Not Supported 00:10:05.630 UUID List: Not Supported 00:10:05.630 Multi-Domain Subsystem: Not Supported 00:10:05.630 Fixed Capacity Management: Not Supported 00:10:05.630 Variable Capacity Management: Not Supported 00:10:05.630 Delete Endurance Group: Not Supported 00:10:05.630 Delete NVM Set: Not Supported 00:10:05.630 Extended LBA Formats Supported: Supported 00:10:05.630 Flexible Data Placement Supported: Supported 00:10:05.630 00:10:05.630 Controller Memory Buffer Support 00:10:05.630 ================================ 00:10:05.630 Supported: No 00:10:05.630 00:10:05.630 Persistent Memory Region Support 00:10:05.630 ================================ 00:10:05.630 Supported: No 00:10:05.630 00:10:05.630 Admin Command Set Attributes 00:10:05.630 ============================ 00:10:05.630 Security Send/Receive: Not Supported 00:10:05.630 Format NVM: Supported 00:10:05.630 Firmware Activate/Download: Not Supported 00:10:05.630 Namespace Management: Supported 00:10:05.630 Device Self-Test: Not Supported 00:10:05.630 Directives: Supported 00:10:05.630 NVMe-MI: Not Supported 00:10:05.630 Virtualization Management: Not Supported 00:10:05.630 Doorbell Buffer Config: Supported 00:10:05.630 Get LBA Status Capability: Not Supported 00:10:05.630 Command & Feature Lockdown Capability: Not Supported 00:10:05.630 Abort Command Limit: 4 00:10:05.630 Async Event Request Limit: 4 00:10:05.630 Number of Firmware Slots: N/A 00:10:05.630 Firmware Slot 1 Read-Only: N/A 00:10:05.630 Firmware Activation Without Reset: N/A 00:10:05.630 Multiple Update Detection Support: N/A 00:10:05.630 Firmware Update Granularity: No Information Provided 00:10:05.630 Per-Namespace SMART Log: Yes 00:10:05.630 Asymmetric Namespace Access Log Page: Not Supported
00:10:05.630 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:05.630 Command Effects Log Page: Supported 00:10:05.630 Get Log Page Extended Data: Supported 00:10:05.630 Telemetry Log Pages: Not Supported 00:10:05.630 Persistent Event Log Pages: Not Supported 00:10:05.630 Supported Log Pages Log Page: May Support 00:10:05.630 Commands Supported & Effects Log Page: Not Supported 00:10:05.630 Feature Identifiers & Effects Log Page: May Support 00:10:05.630 NVMe-MI Commands & Effects Log Page: May Support 00:10:05.630 Data Area 4 for Telemetry Log: Not Supported 00:10:05.630 Error Log Page Entries Supported: 1 00:10:05.630 Keep Alive: Not Supported 00:10:05.630 00:10:05.630 NVM Command Set Attributes 00:10:05.630 ========================== 00:10:05.630 Submission Queue Entry Size 00:10:05.630 Max: 64 00:10:05.630 Min: 64 00:10:05.630 Completion Queue Entry Size 00:10:05.630 Max: 16 00:10:05.630 Min: 16 00:10:05.630 Number of Namespaces: 256 00:10:05.630 Compare Command: Supported 00:10:05.630 Write Uncorrectable Command: Not Supported 00:10:05.630 Dataset Management Command: Supported 00:10:05.630 Write Zeroes Command: Supported 00:10:05.630 Set Features Save Field: Supported 00:10:05.630 Reservations: Not Supported 00:10:05.630 Timestamp: Supported 00:10:05.630 Copy: Supported 00:10:05.630 Volatile Write Cache: Present 00:10:05.630 Atomic Write Unit (Normal): 1 00:10:05.630 Atomic Write Unit (PFail): 1 00:10:05.630 Atomic Compare & Write Unit: 1 00:10:05.630 Fused Compare & Write: Not Supported 00:10:05.630 Scatter-Gather List 00:10:05.630 SGL Command Set: Supported 00:10:05.630 SGL Keyed: Not Supported 00:10:05.630 SGL Bit Bucket Descriptor: Not Supported 00:10:05.630 SGL Metadata Pointer: Not Supported 00:10:05.630 Oversized SGL: Not Supported 00:10:05.630 SGL Metadata Address: Not Supported 00:10:05.630 SGL Offset: Not Supported 00:10:05.630 Transport SGL Data Block: Not Supported 00:10:05.630 Replay Protected Memory Block: Not Supported 00:10:05.630 00:10:05.630 Firmware Slot Information 00:10:05.630 ========================= 00:10:05.630 Active slot: 1 00:10:05.630 Slot 1 Firmware Revision: 1.0 00:10:05.630 00:10:05.630 00:10:05.630 Commands Supported and Effects 00:10:05.630 ============================== 00:10:05.630 Admin Commands 00:10:05.630 -------------- 00:10:05.630 Delete I/O Submission Queue (00h): Supported 00:10:05.630 Create I/O Submission Queue (01h): Supported 00:10:05.630 Get Log Page (02h): Supported 00:10:05.630 Delete I/O Completion Queue (04h): Supported 00:10:05.630 Create I/O Completion Queue (05h): Supported 00:10:05.630 Identify (06h): Supported 00:10:05.630 Abort (08h): Supported 00:10:05.630 Set Features (09h): Supported 00:10:05.630 Get Features (0Ah): Supported 00:10:05.630 Asynchronous Event Request (0Ch): Supported 00:10:05.630 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:05.630 Directive Send (19h): Supported 00:10:05.630 Directive Receive (1Ah): Supported 00:10:05.630 Virtualization Management (1Ch): Supported 00:10:05.630 Doorbell Buffer Config (7Ch): Supported 00:10:05.630 Format NVM (80h): Supported LBA-Change 00:10:05.630 I/O Commands 00:10:05.630 ------------ 00:10:05.630 Flush (00h): Supported LBA-Change 00:10:05.630 Write (01h): Supported LBA-Change 00:10:05.630 Read (02h): Supported 00:10:05.630 Compare (05h): Supported 00:10:05.630 Write Zeroes (08h): Supported LBA-Change 00:10:05.630 Dataset Management (09h): Supported LBA-Change 00:10:05.630 Unknown (0Ch): Supported 00:10:05.630 Unknown (12h): Supported 00:10:05.630 Copy
(19h): Supported LBA-Change 00:10:05.630 Unknown (1Dh): Supported LBA-Change 00:10:05.630 00:10:05.630 Error Log 00:10:05.630 ========= 00:10:05.630 00:10:05.630 Arbitration 00:10:05.630 =========== 00:10:05.630 Arbitration Burst: no limit 00:10:05.630 00:10:05.630 Power Management 00:10:05.630 ================ 00:10:05.630 Number of Power States: 1 00:10:05.630 Current Power State: Power State #0 00:10:05.630 Power State #0: 00:10:05.630 Max Power: 25.00 W 00:10:05.630 Non-Operational State: Operational 00:10:05.630 Entry Latency: 16 microseconds 00:10:05.630 Exit Latency: 4 microseconds 00:10:05.630 Relative Read Throughput: 0 00:10:05.630 Relative Read Latency: 0 00:10:05.630 Relative Write Throughput: 0 00:10:05.630 Relative Write Latency: 0 00:10:05.630 Idle Power: Not Reported 00:10:05.630 Active Power: Not Reported 00:10:05.630 Non-Operational Permissive Mode: Not Supported 00:10:05.630 00:10:05.630 Health Information 00:10:05.630 ================== 00:10:05.630 Critical Warnings: 00:10:05.630 Available Spare Space: OK 00:10:05.630 Temperature: OK 00:10:05.630 Device Reliability: OK 00:10:05.630 Read Only: No 00:10:05.630 Volatile Memory Backup: OK 00:10:05.630 Current Temperature: 323 Kelvin (50 Celsius) 00:10:05.630 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:05.630 Available Spare: 0% 00:10:05.630 Available Spare Threshold: 0% 00:10:05.630 Life Percentage Used: 0% 00:10:05.630 Data Units Read: 902 00:10:05.630 Data Units Written: 831 00:10:05.630 Host Read Commands: 39342 00:10:05.630 Host Write Commands: 38765 00:10:05.630 Controller Busy Time: 0 minutes 00:10:05.630 Power Cycles: 0 00:10:05.630 Power On Hours: 0 hours 00:10:05.630 Unsafe Shutdowns: 0 00:10:05.630 Unrecoverable Media Errors: 0 00:10:05.630 Lifetime Error Log Entries: 0 00:10:05.630 Warning Temperature Time: 0 minutes 00:10:05.630 Critical Temperature Time: 0 minutes 00:10:05.630 00:10:05.630 Number of Queues 00:10:05.630 ================ 00:10:05.630 Number of I/O Submission Queues: 64 00:10:05.630 Number of I/O Completion Queues: 64 00:10:05.630 00:10:05.630 ZNS Specific Controller Data 00:10:05.630 ============================ 00:10:05.630 Zone Append Size Limit: 0 00:10:05.630 00:10:05.630 00:10:05.630 Active Namespaces 00:10:05.630 ================= 00:10:05.630 Namespace ID:1 00:10:05.630 Error Recovery Timeout: Unlimited 00:10:05.630 Command Set Identifier: NVM (00h) 00:10:05.630 Deallocate: Supported 00:10:05.630 Deallocated/Unwritten Error: Supported 00:10:05.630 Deallocated Read Value: All 0x00 00:10:05.630 Deallocate in Write Zeroes: Not Supported 00:10:05.630 Deallocated Guard Field: 0xFFFF 00:10:05.631 Flush: Supported 00:10:05.631 Reservation: Not Supported 00:10:05.631 Namespace Sharing Capabilities: Multiple Controllers 00:10:05.631 Size (in LBAs): 262144 (1GiB) 00:10:05.631 Capacity (in LBAs): 262144 (1GiB) 00:10:05.631 Utilization (in LBAs): 262144 (1GiB) 00:10:05.631 Thin Provisioning: Not Supported 00:10:05.631 Per-NS Atomic Units: No 00:10:05.631 Maximum Single Source Range Length: 128 00:10:05.631 Maximum Copy Length: 128 00:10:05.631 Maximum Source Range Count: 128 00:10:05.631 NGUID/EUI64 Never Reused: No 00:10:05.631 Namespace Write Protected: No 00:10:05.631 Endurance group ID: 1 00:10:05.631 Number of LBA Formats: 8 00:10:05.631 Current LBA Format: LBA Format #04 00:10:05.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:05.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:05.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:05.631 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:05.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:05.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:05.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:05.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:05.631 00:10:05.631 Get Feature FDP: 00:10:05.631 ================ 00:10:05.631 Enabled: Yes 00:10:05.631 FDP configuration index: 0 00:10:05.631 00:10:05.631 FDP configurations log page 00:10:05.631 =========================== 00:10:05.631 Number of FDP configurations: 1 00:10:05.631 Version: 0 00:10:05.631 Size: 112 00:10:05.631 FDP Configuration Descriptor: 0 00:10:05.631 Descriptor Size: 96 00:10:05.631 Reclaim Group Identifier format: 2 00:10:05.631 FDP Volatile Write Cache: Not Present 00:10:05.631 FDP Configuration: Valid 00:10:05.631 Vendor Specific Size: 0 00:10:05.631 Number of Reclaim Groups: 2 00:10:05.631 Number of Reclaim Unit Handles: 8 00:10:05.631 Max Placement Identifiers: 128 00:10:05.631 Number of Namespaces Supported: 256 00:10:05.631 Reclaim Unit Nominal Size: 6000000 bytes 00:10:05.631 Estimated Reclaim Unit Time Limit: Not Reported 00:10:05.631 RUH Desc #000: RUH Type: Initially Isolated 00:10:05.631 RUH Desc #001: RUH Type: Initially Isolated 00:10:05.631 RUH Desc #002: RUH Type: Initially Isolated 00:10:05.631 RUH Desc #003: RUH Type: Initially Isolated 00:10:05.631 RUH Desc #004: RUH Type: Initially Isolated 00:10:05.631 RUH Desc #005: RUH Type: Initially Isolated 00:10:05.631 RUH Desc #006: RUH Type: Initially Isolated 00:10:05.631 RUH Desc #007: RUH Type: Initially Isolated 00:10:05.631 00:10:05.631 FDP reclaim unit handle usage log page 00:10:05.631 ====================================== 00:10:05.631 Number of Reclaim Unit Handles: 8 00:10:05.631 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:05.631 RUH Usage Desc #001: RUH Attributes: Unused 00:10:05.631 RUH Usage Desc #002: RUH Attributes: Unused 00:10:05.631 RUH Usage Desc #003: RUH Attributes: Unused 00:10:05.631 RUH Usage Desc #004: RUH Attributes: Unused 00:10:05.631 RUH Usage Desc #005: RUH Attributes: Unused 00:10:05.631 RUH Usage Desc #006: RUH Attributes: Unused 00:10:05.631 RUH Usage Desc #007: RUH Attributes: Unused 00:10:05.631 00:10:05.631 FDP statistics log page 00:10:05.631 ======================= 00:10:05.631 Host bytes with metadata written: 536518656 00:10:05.631 Media bytes with metadata written: 536576000 00:10:05.631 Media bytes erased: 0 00:10:05.631 00:10:05.631 FDP events log page 00:10:05.631 =================== 00:10:05.631 Number of FDP events: 0 00:10:05.631 00:10:05.631 NVM Specific Namespace Data 00:10:05.631 =========================== 00:10:05.631 Logical Block Storage Tag Mask: 0 00:10:05.631 Protection Information Capabilities: 00:10:05.631 16b Guard Protection Information Storage Tag Support: No 00:10:05.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:05.631 Storage Tag Check Read Support: No 00:10:05.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:05.631 00:10:05.631 real 0m1.699s 00:10:05.631 user 0m0.635s 00:10:05.631 sys 0m0.878s 00:10:05.631 17:59:34 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.631 17:59:34 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:05.631 ************************************ 00:10:05.631 END TEST nvme_identify 00:10:05.631 ************************************ 00:10:05.631 17:59:34 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:05.631 17:59:34 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:05.631 17:59:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.631 17:59:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:05.631 ************************************ 00:10:05.631 START TEST nvme_perf 00:10:05.631 ************************************ 00:10:05.631 17:59:34 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:10:05.631 17:59:34 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:07.053 Initializing NVMe Controllers 00:10:07.053 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:07.053 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:07.053 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:07.053 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:07.053 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:07.053 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:07.053 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:07.053 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:07.053 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:07.053 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:07.053 Initialization complete. Launching workers. 
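The perf step above can be rerun by hand outside the harness. A minimal sketch, assuming the same SPDK build path shown in the trace; the flag glosses follow spdk_nvme_perf's usage text (-q queue depth, -w workload pattern, -o I/O size in bytes, -t run time in seconds; -L enables latency tracking, and giving it twice as -LL also prints the per-range latency histograms seen below), while -i 0 (shared memory group ID) and -N are carried over from the trace as-is:

    # Sketch only: paths and flags mirror the nvme.sh@22 trace above.
    SPDK=/home/vagrant/spdk_repo/spdk
    # Queue depth 128, sequential reads of 12288 bytes (3 x 4096-byte
    # blocks at the current LBA Format #04) for 1 second, with summary
    # latency data plus detailed latency histograms (-LL).
    "$SPDK/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N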
00:10:07.053 ======================================================== 00:10:07.053 Latency(us) 00:10:07.053 Device Information : IOPS MiB/s Average min max 00:10:07.053 PCIE (0000:00:10.0) NSID 1 from core 0: 13853.97 162.35 9258.88 7789.18 48234.09 00:10:07.053 PCIE (0000:00:11.0) NSID 1 from core 0: 13853.97 162.35 9245.06 7836.27 46654.78 00:10:07.053 PCIE (0000:00:13.0) NSID 1 from core 0: 13853.97 162.35 9229.72 7884.26 45400.61 00:10:07.053 PCIE (0000:00:12.0) NSID 1 from core 0: 13853.97 162.35 9214.55 7878.53 43814.34 00:10:07.053 PCIE (0000:00:12.0) NSID 2 from core 0: 13853.97 162.35 9199.00 7870.33 42092.93 00:10:07.053 PCIE (0000:00:12.0) NSID 3 from core 0: 13853.97 162.35 9183.55 7882.80 40424.84 00:10:07.053 ======================================================== 00:10:07.053 Total : 83123.85 974.11 9221.79 7789.18 48234.09 00:10:07.053 00:10:07.053 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:07.053 ================================================================================= 00:10:07.053 1.00000% : 8001.182us 00:10:07.053 10.00000% : 8211.740us 00:10:07.053 25.00000% : 8422.297us 00:10:07.053 50.00000% : 8738.133us 00:10:07.053 75.00000% : 9053.969us 00:10:07.053 90.00000% : 10317.314us 00:10:07.053 95.00000% : 11422.741us 00:10:07.053 98.00000% : 12844.003us 00:10:07.053 99.00000% : 15265.414us 00:10:07.053 99.50000% : 38953.124us 00:10:07.053 99.90000% : 47796.537us 00:10:07.053 99.99000% : 48217.651us 00:10:07.053 99.99900% : 48428.209us 00:10:07.053 99.99990% : 48428.209us 00:10:07.053 99.99999% : 48428.209us 00:10:07.053 00:10:07.053 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:07.053 ================================================================================= 00:10:07.053 1.00000% : 8106.461us 00:10:07.053 10.00000% : 8264.379us 00:10:07.053 25.00000% : 8474.937us 00:10:07.053 50.00000% : 8738.133us 00:10:07.053 75.00000% : 9001.330us 00:10:07.053 90.00000% : 10317.314us 00:10:07.053 95.00000% : 11422.741us 00:10:07.053 98.00000% : 12896.643us 00:10:07.053 99.00000% : 15370.692us 00:10:07.053 99.50000% : 38110.895us 00:10:07.053 99.90000% : 46322.635us 00:10:07.053 99.99000% : 46743.749us 00:10:07.053 99.99900% : 46743.749us 00:10:07.053 99.99990% : 46743.749us 00:10:07.053 99.99999% : 46743.749us 00:10:07.053 00:10:07.053 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:07.053 ================================================================================= 00:10:07.053 1.00000% : 8106.461us 00:10:07.053 10.00000% : 8264.379us 00:10:07.053 25.00000% : 8474.937us 00:10:07.053 50.00000% : 8738.133us 00:10:07.053 75.00000% : 9001.330us 00:10:07.053 90.00000% : 10264.675us 00:10:07.053 95.00000% : 11370.101us 00:10:07.053 98.00000% : 12686.085us 00:10:07.053 99.00000% : 15160.135us 00:10:07.053 99.50000% : 37268.665us 00:10:07.053 99.90000% : 45059.290us 00:10:07.053 99.99000% : 45480.405us 00:10:07.053 99.99900% : 45480.405us 00:10:07.053 99.99990% : 45480.405us 00:10:07.053 99.99999% : 45480.405us 00:10:07.053 00:10:07.053 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:07.053 ================================================================================= 00:10:07.053 1.00000% : 8106.461us 00:10:07.053 10.00000% : 8264.379us 00:10:07.053 25.00000% : 8474.937us 00:10:07.053 50.00000% : 8738.133us 00:10:07.053 75.00000% : 9001.330us 00:10:07.053 90.00000% : 10264.675us 00:10:07.053 95.00000% : 11317.462us 00:10:07.053 98.00000% : 12791.364us 00:10:07.054 
99.00000% : 14633.741us 00:10:07.054 99.50000% : 35584.206us 00:10:07.054 99.90000% : 43374.831us 00:10:07.054 99.99000% : 43795.945us 00:10:07.054 99.99900% : 44006.503us 00:10:07.054 99.99990% : 44006.503us 00:10:07.054 99.99999% : 44006.503us 00:10:07.054 00:10:07.054 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:07.054 ================================================================================= 00:10:07.054 1.00000% : 8106.461us 00:10:07.054 10.00000% : 8264.379us 00:10:07.054 25.00000% : 8474.937us 00:10:07.054 50.00000% : 8738.133us 00:10:07.054 75.00000% : 9001.330us 00:10:07.054 90.00000% : 10264.675us 00:10:07.054 95.00000% : 11317.462us 00:10:07.054 98.00000% : 12580.806us 00:10:07.054 99.00000% : 14633.741us 00:10:07.054 99.50000% : 33899.746us 00:10:07.054 99.90000% : 41690.371us 00:10:07.054 99.99000% : 42111.486us 00:10:07.054 99.99900% : 42111.486us 00:10:07.054 99.99990% : 42111.486us 00:10:07.054 99.99999% : 42111.486us 00:10:07.054 00:10:07.054 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:07.054 ================================================================================= 00:10:07.054 1.00000% : 8106.461us 00:10:07.054 10.00000% : 8264.379us 00:10:07.054 25.00000% : 8474.937us 00:10:07.054 50.00000% : 8738.133us 00:10:07.054 75.00000% : 9001.330us 00:10:07.054 90.00000% : 10264.675us 00:10:07.054 95.00000% : 11370.101us 00:10:07.054 98.00000% : 12370.249us 00:10:07.054 99.00000% : 14739.020us 00:10:07.054 99.50000% : 32425.844us 00:10:07.054 99.90000% : 40005.912us 00:10:07.054 99.99000% : 40427.027us 00:10:07.054 99.99900% : 40427.027us 00:10:07.054 99.99990% : 40427.027us 00:10:07.054 99.99999% : 40427.027us 00:10:07.054 00:10:07.054 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:07.054 ============================================================================== 00:10:07.054 Range in us Cumulative IO count 00:10:07.054 7737.986 - 7790.625: 0.0072% ( 1) 00:10:07.054 7790.625 - 7843.264: 0.0288% ( 3) 00:10:07.054 7843.264 - 7895.904: 0.1512% ( 17) 00:10:07.054 7895.904 - 7948.543: 0.5184% ( 51) 00:10:07.054 7948.543 - 8001.182: 1.2241% ( 98) 00:10:07.054 8001.182 - 8053.822: 2.7146% ( 207) 00:10:07.054 8053.822 - 8106.461: 5.0259% ( 321) 00:10:07.054 8106.461 - 8159.100: 7.8485% ( 392) 00:10:07.054 8159.100 - 8211.740: 11.0023% ( 438) 00:10:07.054 8211.740 - 8264.379: 14.5377% ( 491) 00:10:07.054 8264.379 - 8317.018: 18.0948% ( 494) 00:10:07.054 8317.018 - 8369.658: 21.8102% ( 516) 00:10:07.054 8369.658 - 8422.297: 25.7921% ( 553) 00:10:07.054 8422.297 - 8474.937: 29.9107% ( 572) 00:10:07.054 8474.937 - 8527.576: 34.1734% ( 592) 00:10:07.054 8527.576 - 8580.215: 38.4793% ( 598) 00:10:07.054 8580.215 - 8632.855: 42.7707% ( 596) 00:10:07.054 8632.855 - 8685.494: 47.1558% ( 609) 00:10:07.054 8685.494 - 8738.133: 51.4473% ( 596) 00:10:07.054 8738.133 - 8790.773: 55.9116% ( 620) 00:10:07.054 8790.773 - 8843.412: 60.3615% ( 618) 00:10:07.054 8843.412 - 8896.051: 64.8762% ( 627) 00:10:07.054 8896.051 - 8948.691: 68.9948% ( 572) 00:10:07.054 8948.691 - 9001.330: 72.6959% ( 514) 00:10:07.054 9001.330 - 9053.969: 75.9289% ( 449) 00:10:07.054 9053.969 - 9106.609: 78.6002% ( 371) 00:10:07.054 9106.609 - 9159.248: 80.5516% ( 271) 00:10:07.054 9159.248 - 9211.888: 82.1789% ( 226) 00:10:07.054 9211.888 - 9264.527: 83.4461% ( 176) 00:10:07.054 9264.527 - 9317.166: 84.4038% ( 133) 00:10:07.054 9317.166 - 9369.806: 85.2463% ( 117) 00:10:07.054 9369.806 - 9422.445: 85.8583% ( 85) 00:10:07.054 9422.445 - 
9475.084: 86.3911% ( 74) 00:10:07.054 9475.084 - 9527.724: 86.8232% ( 60) 00:10:07.054 9527.724 - 9580.363: 87.1472% ( 45) 00:10:07.054 9580.363 - 9633.002: 87.4496% ( 42) 00:10:07.054 9633.002 - 9685.642: 87.7952% ( 48) 00:10:07.054 9685.642 - 9738.281: 88.0760% ( 39) 00:10:07.054 9738.281 - 9790.920: 88.3569% ( 39) 00:10:07.054 9790.920 - 9843.560: 88.6233% ( 37) 00:10:07.054 9843.560 - 9896.199: 88.8609% ( 33) 00:10:07.054 9896.199 - 9948.839: 89.0913% ( 32) 00:10:07.054 9948.839 - 10001.478: 89.2713% ( 25) 00:10:07.054 10001.478 - 10054.117: 89.4225% ( 21) 00:10:07.054 10054.117 - 10106.757: 89.6169% ( 27) 00:10:07.054 10106.757 - 10159.396: 89.7321% ( 16) 00:10:07.054 10159.396 - 10212.035: 89.8401% ( 15) 00:10:07.054 10212.035 - 10264.675: 89.9410% ( 14) 00:10:07.054 10264.675 - 10317.314: 90.0418% ( 14) 00:10:07.054 10317.314 - 10369.953: 90.1642% ( 17) 00:10:07.054 10369.953 - 10422.593: 90.2722% ( 15) 00:10:07.054 10422.593 - 10475.232: 90.3874% ( 16) 00:10:07.054 10475.232 - 10527.871: 90.5530% ( 23) 00:10:07.054 10527.871 - 10580.511: 90.7330% ( 25) 00:10:07.054 10580.511 - 10633.150: 90.9562% ( 31) 00:10:07.054 10633.150 - 10685.790: 91.1578% ( 28) 00:10:07.054 10685.790 - 10738.429: 91.4315% ( 38) 00:10:07.054 10738.429 - 10791.068: 91.7267% ( 41) 00:10:07.054 10791.068 - 10843.708: 92.0507% ( 45) 00:10:07.054 10843.708 - 10896.347: 92.3747% ( 45) 00:10:07.054 10896.347 - 10948.986: 92.6411% ( 37) 00:10:07.054 10948.986 - 11001.626: 92.9363% ( 41) 00:10:07.054 11001.626 - 11054.265: 93.2172% ( 39) 00:10:07.054 11054.265 - 11106.904: 93.5052% ( 40) 00:10:07.054 11106.904 - 11159.544: 93.8220% ( 44) 00:10:07.054 11159.544 - 11212.183: 94.0956% ( 38) 00:10:07.054 11212.183 - 11264.822: 94.3620% ( 37) 00:10:07.054 11264.822 - 11317.462: 94.6861% ( 45) 00:10:07.054 11317.462 - 11370.101: 94.9381% ( 35) 00:10:07.054 11370.101 - 11422.741: 95.1973% ( 36) 00:10:07.054 11422.741 - 11475.380: 95.5141% ( 44) 00:10:07.054 11475.380 - 11528.019: 95.7877% ( 38) 00:10:07.054 11528.019 - 11580.659: 96.0613% ( 38) 00:10:07.054 11580.659 - 11633.298: 96.3422% ( 39) 00:10:07.054 11633.298 - 11685.937: 96.5726% ( 32) 00:10:07.054 11685.937 - 11738.577: 96.8462% ( 38) 00:10:07.054 11738.577 - 11791.216: 97.0478% ( 28) 00:10:07.054 11791.216 - 11843.855: 97.2062% ( 22) 00:10:07.054 11843.855 - 11896.495: 97.3142% ( 15) 00:10:07.054 11896.495 - 11949.134: 97.4294% ( 16) 00:10:07.054 11949.134 - 12001.773: 97.5446% ( 16) 00:10:07.054 12001.773 - 12054.413: 97.5878% ( 6) 00:10:07.054 12054.413 - 12107.052: 97.6166% ( 4) 00:10:07.054 12107.052 - 12159.692: 97.6526% ( 5) 00:10:07.054 12159.692 - 12212.331: 97.7031% ( 7) 00:10:07.054 12212.331 - 12264.970: 97.7175% ( 2) 00:10:07.054 12264.970 - 12317.610: 97.7463% ( 4) 00:10:07.054 12317.610 - 12370.249: 97.7751% ( 4) 00:10:07.054 12370.249 - 12422.888: 97.8327% ( 8) 00:10:07.054 12422.888 - 12475.528: 97.8615% ( 4) 00:10:07.054 12475.528 - 12528.167: 97.8975% ( 5) 00:10:07.054 12528.167 - 12580.806: 97.9263% ( 4) 00:10:07.054 12580.806 - 12633.446: 97.9551% ( 4) 00:10:07.054 12633.446 - 12686.085: 97.9767% ( 3) 00:10:07.054 12686.085 - 12738.724: 97.9911% ( 2) 00:10:07.054 12738.724 - 12791.364: 97.9983% ( 1) 00:10:07.054 12791.364 - 12844.003: 98.0055% ( 1) 00:10:07.054 12844.003 - 12896.643: 98.0199% ( 2) 00:10:07.054 12896.643 - 12949.282: 98.0343% ( 2) 00:10:07.054 12949.282 - 13001.921: 98.0415% ( 1) 00:10:07.054 13001.921 - 13054.561: 98.0487% ( 1) 00:10:07.054 13054.561 - 13107.200: 98.0631% ( 2) 00:10:07.054 13107.200 - 13159.839: 98.0703% ( 1) 
00:10:07.054 13159.839 - 13212.479: 98.0775% ( 1) 00:10:07.054 13212.479 - 13265.118: 98.0991% ( 3) 00:10:07.054 13265.118 - 13317.757: 98.1063% ( 1) 00:10:07.054 13317.757 - 13370.397: 98.1135% ( 1) 00:10:07.054 13370.397 - 13423.036: 98.1279% ( 2) 00:10:07.054 13423.036 - 13475.676: 98.1567% ( 4) 00:10:07.054 13475.676 - 13580.954: 98.1927% ( 5) 00:10:07.054 13580.954 - 13686.233: 98.2287% ( 5) 00:10:07.054 13686.233 - 13791.512: 98.2503% ( 3) 00:10:07.054 13791.512 - 13896.790: 98.2863% ( 5) 00:10:07.054 13896.790 - 14002.069: 98.3367% ( 7) 00:10:07.054 14002.069 - 14107.348: 98.3943% ( 8) 00:10:07.054 14107.348 - 14212.627: 98.4447% ( 7) 00:10:07.054 14212.627 - 14317.905: 98.5095% ( 9) 00:10:07.054 14317.905 - 14423.184: 98.5671% ( 8) 00:10:07.054 14423.184 - 14528.463: 98.6175% ( 7) 00:10:07.055 14528.463 - 14633.741: 98.6823% ( 9) 00:10:07.055 14633.741 - 14739.020: 98.7471% ( 9) 00:10:07.055 14739.020 - 14844.299: 98.8119% ( 9) 00:10:07.055 14844.299 - 14949.578: 98.8551% ( 6) 00:10:07.055 14949.578 - 15054.856: 98.9271% ( 10) 00:10:07.055 15054.856 - 15160.135: 98.9847% ( 8) 00:10:07.055 15160.135 - 15265.414: 99.0351% ( 7) 00:10:07.055 15265.414 - 15370.692: 99.0495% ( 2) 00:10:07.055 15370.692 - 15475.971: 99.0783% ( 4) 00:10:07.055 36636.993 - 36847.550: 99.0855% ( 1) 00:10:07.055 36847.550 - 37058.108: 99.1215% ( 5) 00:10:07.055 37058.108 - 37268.665: 99.1719% ( 7) 00:10:07.055 37268.665 - 37479.222: 99.2079% ( 5) 00:10:07.055 37479.222 - 37689.780: 99.2512% ( 6) 00:10:07.055 37689.780 - 37900.337: 99.3016% ( 7) 00:10:07.055 37900.337 - 38110.895: 99.3448% ( 6) 00:10:07.055 38110.895 - 38321.452: 99.3880% ( 6) 00:10:07.055 38321.452 - 38532.010: 99.4168% ( 4) 00:10:07.055 38532.010 - 38742.567: 99.4672% ( 7) 00:10:07.055 38742.567 - 38953.124: 99.5176% ( 7) 00:10:07.055 38953.124 - 39163.682: 99.5392% ( 3) 00:10:07.055 45690.962 - 45901.520: 99.5608% ( 3) 00:10:07.055 45901.520 - 46112.077: 99.5968% ( 5) 00:10:07.055 46112.077 - 46322.635: 99.6400% ( 6) 00:10:07.055 46322.635 - 46533.192: 99.6832% ( 6) 00:10:07.055 46533.192 - 46743.749: 99.7264% ( 6) 00:10:07.055 46743.749 - 46954.307: 99.7696% ( 6) 00:10:07.055 46954.307 - 47164.864: 99.8056% ( 5) 00:10:07.055 47164.864 - 47375.422: 99.8416% ( 5) 00:10:07.055 47375.422 - 47585.979: 99.8848% ( 6) 00:10:07.055 47585.979 - 47796.537: 99.9280% ( 6) 00:10:07.055 47796.537 - 48007.094: 99.9568% ( 4) 00:10:07.055 48007.094 - 48217.651: 99.9928% ( 5) 00:10:07.055 48217.651 - 48428.209: 100.0000% ( 1) 00:10:07.055 00:10:07.055 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:07.055 ============================================================================== 00:10:07.055 Range in us Cumulative IO count 00:10:07.055 7790.625 - 7843.264: 0.0144% ( 2) 00:10:07.055 7843.264 - 7895.904: 0.0432% ( 4) 00:10:07.055 7895.904 - 7948.543: 0.1440% ( 14) 00:10:07.055 7948.543 - 8001.182: 0.3672% ( 31) 00:10:07.055 8001.182 - 8053.822: 0.8713% ( 70) 00:10:07.055 8053.822 - 8106.461: 2.2321% ( 189) 00:10:07.055 8106.461 - 8159.100: 4.3131% ( 289) 00:10:07.055 8159.100 - 8211.740: 7.1645% ( 396) 00:10:07.055 8211.740 - 8264.379: 10.5847% ( 475) 00:10:07.055 8264.379 - 8317.018: 14.6313% ( 562) 00:10:07.055 8317.018 - 8369.658: 18.9588% ( 601) 00:10:07.055 8369.658 - 8422.297: 23.5095% ( 632) 00:10:07.055 8422.297 - 8474.937: 28.2402% ( 657) 00:10:07.055 8474.937 - 8527.576: 33.1365% ( 680) 00:10:07.055 8527.576 - 8580.215: 38.2056% ( 704) 00:10:07.055 8580.215 - 8632.855: 43.2892% ( 706) 00:10:07.055 8632.855 - 8685.494: 48.4663% 
( 719) 00:10:07.055 8685.494 - 8738.133: 53.6146% ( 715) 00:10:07.055 8738.133 - 8790.773: 58.8206% ( 723) 00:10:07.055 8790.773 - 8843.412: 63.7169% ( 680) 00:10:07.055 8843.412 - 8896.051: 68.4116% ( 652) 00:10:07.055 8896.051 - 8948.691: 72.4942% ( 567) 00:10:07.055 8948.691 - 9001.330: 75.7344% ( 450) 00:10:07.055 9001.330 - 9053.969: 78.2690% ( 352) 00:10:07.055 9053.969 - 9106.609: 80.2995% ( 282) 00:10:07.055 9106.609 - 9159.248: 81.9556% ( 230) 00:10:07.055 9159.248 - 9211.888: 83.2445% ( 179) 00:10:07.055 9211.888 - 9264.527: 84.3030% ( 147) 00:10:07.055 9264.527 - 9317.166: 85.1238% ( 114) 00:10:07.055 9317.166 - 9369.806: 85.8367% ( 99) 00:10:07.055 9369.806 - 9422.445: 86.3191% ( 67) 00:10:07.055 9422.445 - 9475.084: 86.7368% ( 58) 00:10:07.055 9475.084 - 9527.724: 87.0896% ( 49) 00:10:07.055 9527.724 - 9580.363: 87.4352% ( 48) 00:10:07.055 9580.363 - 9633.002: 87.7592% ( 45) 00:10:07.055 9633.002 - 9685.642: 87.9968% ( 33) 00:10:07.055 9685.642 - 9738.281: 88.2560% ( 36) 00:10:07.055 9738.281 - 9790.920: 88.5081% ( 35) 00:10:07.055 9790.920 - 9843.560: 88.7169% ( 29) 00:10:07.055 9843.560 - 9896.199: 88.9113% ( 27) 00:10:07.055 9896.199 - 9948.839: 89.0985% ( 26) 00:10:07.055 9948.839 - 10001.478: 89.2641% ( 23) 00:10:07.055 10001.478 - 10054.117: 89.4225% ( 22) 00:10:07.055 10054.117 - 10106.757: 89.5593% ( 19) 00:10:07.055 10106.757 - 10159.396: 89.7105% ( 21) 00:10:07.055 10159.396 - 10212.035: 89.8329% ( 17) 00:10:07.055 10212.035 - 10264.675: 89.9770% ( 20) 00:10:07.055 10264.675 - 10317.314: 90.1210% ( 20) 00:10:07.055 10317.314 - 10369.953: 90.2290% ( 15) 00:10:07.055 10369.953 - 10422.593: 90.3442% ( 16) 00:10:07.055 10422.593 - 10475.232: 90.4594% ( 16) 00:10:07.055 10475.232 - 10527.871: 90.5890% ( 18) 00:10:07.055 10527.871 - 10580.511: 90.7258% ( 19) 00:10:07.055 10580.511 - 10633.150: 90.8842% ( 22) 00:10:07.055 10633.150 - 10685.790: 91.0498% ( 23) 00:10:07.055 10685.790 - 10738.429: 91.2514% ( 28) 00:10:07.055 10738.429 - 10791.068: 91.4819% ( 32) 00:10:07.055 10791.068 - 10843.708: 91.7555% ( 38) 00:10:07.055 10843.708 - 10896.347: 92.0435% ( 40) 00:10:07.055 10896.347 - 10948.986: 92.3675% ( 45) 00:10:07.055 10948.986 - 11001.626: 92.6771% ( 43) 00:10:07.055 11001.626 - 11054.265: 92.9868% ( 43) 00:10:07.055 11054.265 - 11106.904: 93.3180% ( 46) 00:10:07.055 11106.904 - 11159.544: 93.6564% ( 47) 00:10:07.055 11159.544 - 11212.183: 93.9732% ( 44) 00:10:07.055 11212.183 - 11264.822: 94.2972% ( 45) 00:10:07.055 11264.822 - 11317.462: 94.6141% ( 44) 00:10:07.055 11317.462 - 11370.101: 94.9165% ( 42) 00:10:07.055 11370.101 - 11422.741: 95.2477% ( 46) 00:10:07.055 11422.741 - 11475.380: 95.5861% ( 47) 00:10:07.055 11475.380 - 11528.019: 95.9101% ( 45) 00:10:07.055 11528.019 - 11580.659: 96.2198% ( 43) 00:10:07.055 11580.659 - 11633.298: 96.5150% ( 41) 00:10:07.055 11633.298 - 11685.937: 96.7814% ( 37) 00:10:07.055 11685.937 - 11738.577: 96.9902% ( 29) 00:10:07.055 11738.577 - 11791.216: 97.1414% ( 21) 00:10:07.055 11791.216 - 11843.855: 97.2638% ( 17) 00:10:07.055 11843.855 - 11896.495: 97.3646% ( 14) 00:10:07.055 11896.495 - 11949.134: 97.4582% ( 13) 00:10:07.055 11949.134 - 12001.773: 97.5446% ( 12) 00:10:07.055 12001.773 - 12054.413: 97.5878% ( 6) 00:10:07.055 12054.413 - 12107.052: 97.6166% ( 4) 00:10:07.055 12107.052 - 12159.692: 97.6454% ( 4) 00:10:07.055 12159.692 - 12212.331: 97.6815% ( 5) 00:10:07.055 12212.331 - 12264.970: 97.7103% ( 4) 00:10:07.055 12264.970 - 12317.610: 97.7319% ( 3) 00:10:07.055 12317.610 - 12370.249: 97.7607% ( 4) 00:10:07.055 
12370.249 - 12422.888: 97.7895% ( 4) 00:10:07.055 12422.888 - 12475.528: 97.8111% ( 3) 00:10:07.055 12475.528 - 12528.167: 97.8399% ( 4) 00:10:07.055 12528.167 - 12580.806: 97.8687% ( 4) 00:10:07.055 12580.806 - 12633.446: 97.8975% ( 4) 00:10:07.055 12633.446 - 12686.085: 97.9191% ( 3) 00:10:07.055 12686.085 - 12738.724: 97.9479% ( 4) 00:10:07.055 12738.724 - 12791.364: 97.9695% ( 3) 00:10:07.055 12791.364 - 12844.003: 97.9983% ( 4) 00:10:07.055 12844.003 - 12896.643: 98.0127% ( 2) 00:10:07.055 12896.643 - 12949.282: 98.0415% ( 4) 00:10:07.055 12949.282 - 13001.921: 98.0775% ( 5) 00:10:07.055 13001.921 - 13054.561: 98.1063% ( 4) 00:10:07.055 13054.561 - 13107.200: 98.1423% ( 5) 00:10:07.055 13107.200 - 13159.839: 98.1711% ( 4) 00:10:07.055 13159.839 - 13212.479: 98.1927% ( 3) 00:10:07.055 13212.479 - 13265.118: 98.2359% ( 6) 00:10:07.055 13265.118 - 13317.757: 98.2647% ( 4) 00:10:07.055 13317.757 - 13370.397: 98.3007% ( 5) 00:10:07.055 13370.397 - 13423.036: 98.3223% ( 3) 00:10:07.055 13423.036 - 13475.676: 98.3439% ( 3) 00:10:07.055 13475.676 - 13580.954: 98.3799% ( 5) 00:10:07.055 13580.954 - 13686.233: 98.4231% ( 6) 00:10:07.055 13686.233 - 13791.512: 98.4591% ( 5) 00:10:07.055 13791.512 - 13896.790: 98.4951% ( 5) 00:10:07.055 13896.790 - 14002.069: 98.5311% ( 5) 00:10:07.055 14002.069 - 14107.348: 98.5599% ( 4) 00:10:07.055 14107.348 - 14212.627: 98.5887% ( 4) 00:10:07.056 14212.627 - 14317.905: 98.6391% ( 7) 00:10:07.056 14317.905 - 14423.184: 98.6751% ( 5) 00:10:07.056 14423.184 - 14528.463: 98.7183% ( 6) 00:10:07.056 14528.463 - 14633.741: 98.7615% ( 6) 00:10:07.056 14633.741 - 14739.020: 98.8047% ( 6) 00:10:07.056 14739.020 - 14844.299: 98.8407% ( 5) 00:10:07.056 14844.299 - 14949.578: 98.8839% ( 6) 00:10:07.056 14949.578 - 15054.856: 98.9271% ( 6) 00:10:07.056 15054.856 - 15160.135: 98.9631% ( 5) 00:10:07.056 15160.135 - 15265.414: 98.9991% ( 5) 00:10:07.056 15265.414 - 15370.692: 99.0423% ( 6) 00:10:07.056 15370.692 - 15475.971: 99.0783% ( 5) 00:10:07.056 36005.320 - 36215.878: 99.1143% ( 5) 00:10:07.056 36215.878 - 36426.435: 99.1575% ( 6) 00:10:07.056 36426.435 - 36636.993: 99.2079% ( 7) 00:10:07.056 36636.993 - 36847.550: 99.2512% ( 6) 00:10:07.056 36847.550 - 37058.108: 99.3016% ( 7) 00:10:07.056 37058.108 - 37268.665: 99.3520% ( 7) 00:10:07.056 37268.665 - 37479.222: 99.3952% ( 6) 00:10:07.056 37479.222 - 37689.780: 99.4456% ( 7) 00:10:07.056 37689.780 - 37900.337: 99.4960% ( 7) 00:10:07.056 37900.337 - 38110.895: 99.5392% ( 6) 00:10:07.056 44217.060 - 44427.618: 99.5536% ( 2) 00:10:07.056 44427.618 - 44638.175: 99.5968% ( 6) 00:10:07.056 44638.175 - 44848.733: 99.6400% ( 6) 00:10:07.056 44848.733 - 45059.290: 99.6760% ( 5) 00:10:07.056 45059.290 - 45269.847: 99.7264% ( 7) 00:10:07.056 45269.847 - 45480.405: 99.7696% ( 6) 00:10:07.056 45480.405 - 45690.962: 99.8128% ( 6) 00:10:07.056 45690.962 - 45901.520: 99.8488% ( 5) 00:10:07.056 45901.520 - 46112.077: 99.8920% ( 6) 00:10:07.056 46112.077 - 46322.635: 99.9352% ( 6) 00:10:07.056 46322.635 - 46533.192: 99.9712% ( 5) 00:10:07.056 46533.192 - 46743.749: 100.0000% ( 4) 00:10:07.056 00:10:07.056 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:07.056 ============================================================================== 00:10:07.056 Range in us Cumulative IO count 00:10:07.056 7843.264 - 7895.904: 0.0360% ( 5) 00:10:07.056 7895.904 - 7948.543: 0.0864% ( 7) 00:10:07.056 7948.543 - 8001.182: 0.2592% ( 24) 00:10:07.056 8001.182 - 8053.822: 0.8209% ( 78) 00:10:07.056 8053.822 - 8106.461: 2.1385% ( 183) 
00:10:07.056 8106.461 - 8159.100: 4.1403% ( 278) 00:10:07.056 8159.100 - 8211.740: 7.0204% ( 400) 00:10:07.056 8211.740 - 8264.379: 10.5415% ( 489) 00:10:07.056 8264.379 - 8317.018: 14.3865% ( 534) 00:10:07.056 8317.018 - 8369.658: 18.7860% ( 611) 00:10:07.056 8369.658 - 8422.297: 23.2719% ( 623) 00:10:07.056 8422.297 - 8474.937: 27.8802% ( 640) 00:10:07.056 8474.937 - 8527.576: 32.8557% ( 691) 00:10:07.056 8527.576 - 8580.215: 37.8744% ( 697) 00:10:07.056 8580.215 - 8632.855: 43.0012% ( 712) 00:10:07.056 8632.855 - 8685.494: 48.2071% ( 723) 00:10:07.056 8685.494 - 8738.133: 53.3122% ( 709) 00:10:07.056 8738.133 - 8790.773: 58.3237% ( 696) 00:10:07.056 8790.773 - 8843.412: 63.4361% ( 710) 00:10:07.056 8843.412 - 8896.051: 68.2748% ( 672) 00:10:07.056 8896.051 - 8948.691: 72.3502% ( 566) 00:10:07.056 8948.691 - 9001.330: 75.6696% ( 461) 00:10:07.056 9001.330 - 9053.969: 78.2114% ( 353) 00:10:07.056 9053.969 - 9106.609: 80.2995% ( 290) 00:10:07.056 9106.609 - 9159.248: 81.9700% ( 232) 00:10:07.056 9159.248 - 9211.888: 83.2661% ( 180) 00:10:07.056 9211.888 - 9264.527: 84.2886% ( 142) 00:10:07.056 9264.527 - 9317.166: 85.1959% ( 126) 00:10:07.056 9317.166 - 9369.806: 85.8943% ( 97) 00:10:07.056 9369.806 - 9422.445: 86.4055% ( 71) 00:10:07.056 9422.445 - 9475.084: 86.8448% ( 61) 00:10:07.056 9475.084 - 9527.724: 87.2336% ( 54) 00:10:07.056 9527.724 - 9580.363: 87.5720% ( 47) 00:10:07.056 9580.363 - 9633.002: 87.8888% ( 44) 00:10:07.056 9633.002 - 9685.642: 88.1552% ( 37) 00:10:07.056 9685.642 - 9738.281: 88.3641% ( 29) 00:10:07.056 9738.281 - 9790.920: 88.5585% ( 27) 00:10:07.056 9790.920 - 9843.560: 88.7817% ( 31) 00:10:07.056 9843.560 - 9896.199: 88.9689% ( 26) 00:10:07.056 9896.199 - 9948.839: 89.1921% ( 31) 00:10:07.056 9948.839 - 10001.478: 89.3217% ( 18) 00:10:07.056 10001.478 - 10054.117: 89.4585% ( 19) 00:10:07.056 10054.117 - 10106.757: 89.5881% ( 18) 00:10:07.056 10106.757 - 10159.396: 89.7393% ( 21) 00:10:07.056 10159.396 - 10212.035: 89.8906% ( 21) 00:10:07.056 10212.035 - 10264.675: 90.0490% ( 22) 00:10:07.056 10264.675 - 10317.314: 90.2218% ( 24) 00:10:07.056 10317.314 - 10369.953: 90.3874% ( 23) 00:10:07.056 10369.953 - 10422.593: 90.5242% ( 19) 00:10:07.056 10422.593 - 10475.232: 90.6754% ( 21) 00:10:07.056 10475.232 - 10527.871: 90.8266% ( 21) 00:10:07.056 10527.871 - 10580.511: 91.0210% ( 27) 00:10:07.056 10580.511 - 10633.150: 91.1938% ( 24) 00:10:07.056 10633.150 - 10685.790: 91.3594% ( 23) 00:10:07.056 10685.790 - 10738.429: 91.5323% ( 24) 00:10:07.056 10738.429 - 10791.068: 91.7483% ( 30) 00:10:07.056 10791.068 - 10843.708: 91.9715% ( 31) 00:10:07.056 10843.708 - 10896.347: 92.2595% ( 40) 00:10:07.056 10896.347 - 10948.986: 92.5907% ( 46) 00:10:07.056 10948.986 - 11001.626: 92.9075% ( 44) 00:10:07.056 11001.626 - 11054.265: 93.2172% ( 43) 00:10:07.056 11054.265 - 11106.904: 93.5484% ( 46) 00:10:07.056 11106.904 - 11159.544: 93.8724% ( 45) 00:10:07.056 11159.544 - 11212.183: 94.2036% ( 46) 00:10:07.056 11212.183 - 11264.822: 94.5204% ( 44) 00:10:07.056 11264.822 - 11317.462: 94.8301% ( 43) 00:10:07.056 11317.462 - 11370.101: 95.1397% ( 43) 00:10:07.056 11370.101 - 11422.741: 95.4493% ( 43) 00:10:07.056 11422.741 - 11475.380: 95.7589% ( 43) 00:10:07.056 11475.380 - 11528.019: 96.0901% ( 46) 00:10:07.056 11528.019 - 11580.659: 96.3710% ( 39) 00:10:07.056 11580.659 - 11633.298: 96.6518% ( 39) 00:10:07.056 11633.298 - 11685.937: 96.9110% ( 36) 00:10:07.056 11685.937 - 11738.577: 97.1054% ( 27) 00:10:07.056 11738.577 - 11791.216: 97.2494% ( 20) 00:10:07.056 11791.216 - 
11843.855: 97.3502% ( 14) 00:10:07.056 11843.855 - 11896.495: 97.4294% ( 11) 00:10:07.056 11896.495 - 11949.134: 97.4942% ( 9) 00:10:07.056 11949.134 - 12001.773: 97.5086% ( 2) 00:10:07.056 12001.773 - 12054.413: 97.5374% ( 4) 00:10:07.056 12054.413 - 12107.052: 97.5662% ( 4) 00:10:07.056 12107.052 - 12159.692: 97.5950% ( 4) 00:10:07.056 12159.692 - 12212.331: 97.6166% ( 3) 00:10:07.056 12212.331 - 12264.970: 97.6599% ( 6) 00:10:07.056 12264.970 - 12317.610: 97.6959% ( 5) 00:10:07.056 12317.610 - 12370.249: 97.7463% ( 7) 00:10:07.056 12370.249 - 12422.888: 97.7895% ( 6) 00:10:07.056 12422.888 - 12475.528: 97.8327% ( 6) 00:10:07.056 12475.528 - 12528.167: 97.8831% ( 7) 00:10:07.056 12528.167 - 12580.806: 97.9263% ( 6) 00:10:07.056 12580.806 - 12633.446: 97.9767% ( 7) 00:10:07.056 12633.446 - 12686.085: 98.0127% ( 5) 00:10:07.056 12686.085 - 12738.724: 98.0559% ( 6) 00:10:07.056 12738.724 - 12791.364: 98.1063% ( 7) 00:10:07.056 12791.364 - 12844.003: 98.1351% ( 4) 00:10:07.056 12844.003 - 12896.643: 98.1711% ( 5) 00:10:07.056 12896.643 - 12949.282: 98.2071% ( 5) 00:10:07.056 12949.282 - 13001.921: 98.2431% ( 5) 00:10:07.056 13001.921 - 13054.561: 98.2791% ( 5) 00:10:07.056 13054.561 - 13107.200: 98.3079% ( 4) 00:10:07.056 13107.200 - 13159.839: 98.3439% ( 5) 00:10:07.056 13159.839 - 13212.479: 98.3727% ( 4) 00:10:07.056 13212.479 - 13265.118: 98.4087% ( 5) 00:10:07.056 13265.118 - 13317.757: 98.4447% ( 5) 00:10:07.056 13317.757 - 13370.397: 98.4735% ( 4) 00:10:07.056 13370.397 - 13423.036: 98.5095% ( 5) 00:10:07.056 13423.036 - 13475.676: 98.5383% ( 4) 00:10:07.056 13475.676 - 13580.954: 98.6247% ( 12) 00:10:07.056 13580.954 - 13686.233: 98.6751% ( 7) 00:10:07.056 13686.233 - 13791.512: 98.7039% ( 4) 00:10:07.056 13791.512 - 13896.790: 98.7255% ( 3) 00:10:07.056 13896.790 - 14002.069: 98.7471% ( 3) 00:10:07.056 14002.069 - 14107.348: 98.7687% ( 3) 00:10:07.056 14107.348 - 14212.627: 98.7975% ( 4) 00:10:07.056 14212.627 - 14317.905: 98.8191% ( 3) 00:10:07.056 14317.905 - 14423.184: 98.8407% ( 3) 00:10:07.056 14423.184 - 14528.463: 98.8695% ( 4) 00:10:07.056 14528.463 - 14633.741: 98.8911% ( 3) 00:10:07.056 14633.741 - 14739.020: 98.9199% ( 4) 00:10:07.056 14739.020 - 14844.299: 98.9415% ( 3) 00:10:07.056 14844.299 - 14949.578: 98.9703% ( 4) 00:10:07.056 14949.578 - 15054.856: 98.9991% ( 4) 00:10:07.056 15054.856 - 15160.135: 99.0207% ( 3) 00:10:07.056 15160.135 - 15265.414: 99.0495% ( 4) 00:10:07.056 15265.414 - 15370.692: 99.0711% ( 3) 00:10:07.056 15370.692 - 15475.971: 99.0783% ( 1) 00:10:07.056 35163.091 - 35373.648: 99.0999% ( 3) 00:10:07.056 35373.648 - 35584.206: 99.1431% ( 6) 00:10:07.056 35584.206 - 35794.763: 99.1935% ( 7) 00:10:07.056 35794.763 - 36005.320: 99.2440% ( 7) 00:10:07.056 36005.320 - 36215.878: 99.2944% ( 7) 00:10:07.056 36215.878 - 36426.435: 99.3448% ( 7) 00:10:07.057 36426.435 - 36636.993: 99.3952% ( 7) 00:10:07.057 36636.993 - 36847.550: 99.4456% ( 7) 00:10:07.057 36847.550 - 37058.108: 99.4888% ( 6) 00:10:07.057 37058.108 - 37268.665: 99.5392% ( 7) 00:10:07.057 42953.716 - 43164.273: 99.5536% ( 2) 00:10:07.057 43164.273 - 43374.831: 99.5896% ( 5) 00:10:07.057 43374.831 - 43585.388: 99.6256% ( 5) 00:10:07.057 43585.388 - 43795.945: 99.6688% ( 6) 00:10:07.057 43795.945 - 44006.503: 99.7048% ( 5) 00:10:07.057 44006.503 - 44217.060: 99.7480% ( 6) 00:10:07.057 44217.060 - 44427.618: 99.7912% ( 6) 00:10:07.057 44427.618 - 44638.175: 99.8344% ( 6) 00:10:07.057 44638.175 - 44848.733: 99.8776% ( 6) 00:10:07.057 44848.733 - 45059.290: 99.9280% ( 7) 00:10:07.057 45059.290 - 
45269.847: 99.9712% ( 6) 00:10:07.057 45269.847 - 45480.405: 100.0000% ( 4) 00:10:07.057 00:10:07.057 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:07.057 ============================================================================== 00:10:07.057 Range in us Cumulative IO count 00:10:07.057 7843.264 - 7895.904: 0.0288% ( 4) 00:10:07.057 7895.904 - 7948.543: 0.1008% ( 10) 00:10:07.057 7948.543 - 8001.182: 0.2736% ( 24) 00:10:07.057 8001.182 - 8053.822: 0.7560% ( 67) 00:10:07.057 8053.822 - 8106.461: 1.9729% ( 169) 00:10:07.057 8106.461 - 8159.100: 4.0035% ( 282) 00:10:07.057 8159.100 - 8211.740: 6.9268% ( 406) 00:10:07.057 8211.740 - 8264.379: 10.4119% ( 484) 00:10:07.057 8264.379 - 8317.018: 14.3649% ( 549) 00:10:07.057 8317.018 - 8369.658: 18.7068% ( 603) 00:10:07.057 8369.658 - 8422.297: 23.1855% ( 622) 00:10:07.057 8422.297 - 8474.937: 27.8874% ( 653) 00:10:07.057 8474.937 - 8527.576: 32.7189% ( 671) 00:10:07.057 8527.576 - 8580.215: 37.8600% ( 714) 00:10:07.057 8580.215 - 8632.855: 42.9579% ( 708) 00:10:07.057 8632.855 - 8685.494: 48.0991% ( 714) 00:10:07.057 8685.494 - 8738.133: 53.2546% ( 716) 00:10:07.057 8738.133 - 8790.773: 58.4749% ( 725) 00:10:07.057 8790.773 - 8843.412: 63.6089% ( 713) 00:10:07.057 8843.412 - 8896.051: 68.3756% ( 662) 00:10:07.057 8896.051 - 8948.691: 72.4222% ( 562) 00:10:07.057 8948.691 - 9001.330: 75.5688% ( 437) 00:10:07.057 9001.330 - 9053.969: 77.9810% ( 335) 00:10:07.057 9053.969 - 9106.609: 79.9467% ( 273) 00:10:07.057 9106.609 - 9159.248: 81.5164% ( 218) 00:10:07.057 9159.248 - 9211.888: 82.8053% ( 179) 00:10:07.057 9211.888 - 9264.527: 83.9502% ( 159) 00:10:07.057 9264.527 - 9317.166: 84.8862% ( 130) 00:10:07.057 9317.166 - 9369.806: 85.4983% ( 85) 00:10:07.057 9369.806 - 9422.445: 85.9519% ( 63) 00:10:07.057 9422.445 - 9475.084: 86.4055% ( 63) 00:10:07.057 9475.084 - 9527.724: 86.8016% ( 55) 00:10:07.057 9527.724 - 9580.363: 87.1904% ( 54) 00:10:07.057 9580.363 - 9633.002: 87.5288% ( 47) 00:10:07.057 9633.002 - 9685.642: 87.8024% ( 38) 00:10:07.057 9685.642 - 9738.281: 88.1048% ( 42) 00:10:07.057 9738.281 - 9790.920: 88.3497% ( 34) 00:10:07.057 9790.920 - 9843.560: 88.6089% ( 36) 00:10:07.057 9843.560 - 9896.199: 88.8321% ( 31) 00:10:07.057 9896.199 - 9948.839: 89.0265% ( 27) 00:10:07.057 9948.839 - 10001.478: 89.2281% ( 28) 00:10:07.057 10001.478 - 10054.117: 89.3937% ( 23) 00:10:07.057 10054.117 - 10106.757: 89.5737% ( 25) 00:10:07.057 10106.757 - 10159.396: 89.7537% ( 25) 00:10:07.057 10159.396 - 10212.035: 89.9410% ( 26) 00:10:07.057 10212.035 - 10264.675: 90.1066% ( 23) 00:10:07.057 10264.675 - 10317.314: 90.2722% ( 23) 00:10:07.057 10317.314 - 10369.953: 90.4450% ( 24) 00:10:07.057 10369.953 - 10422.593: 90.6250% ( 25) 00:10:07.057 10422.593 - 10475.232: 90.8122% ( 26) 00:10:07.057 10475.232 - 10527.871: 90.9850% ( 24) 00:10:07.057 10527.871 - 10580.511: 91.1434% ( 22) 00:10:07.057 10580.511 - 10633.150: 91.2802% ( 19) 00:10:07.057 10633.150 - 10685.790: 91.4459% ( 23) 00:10:07.057 10685.790 - 10738.429: 91.6763% ( 32) 00:10:07.057 10738.429 - 10791.068: 91.9139% ( 33) 00:10:07.057 10791.068 - 10843.708: 92.1947% ( 39) 00:10:07.057 10843.708 - 10896.347: 92.4827% ( 40) 00:10:07.057 10896.347 - 10948.986: 92.7995% ( 44) 00:10:07.057 10948.986 - 11001.626: 93.1236% ( 45) 00:10:07.057 11001.626 - 11054.265: 93.4188% ( 41) 00:10:07.057 11054.265 - 11106.904: 93.7572% ( 47) 00:10:07.057 11106.904 - 11159.544: 94.0956% ( 47) 00:10:07.057 11159.544 - 11212.183: 94.4196% ( 45) 00:10:07.057 11212.183 - 11264.822: 94.7365% ( 44) 
00:10:07.057 11264.822 - 11317.462: 95.0677% ( 46) 00:10:07.057 11317.462 - 11370.101: 95.3845% ( 44) 00:10:07.057 11370.101 - 11422.741: 95.6581% ( 38) 00:10:07.057 11422.741 - 11475.380: 95.9533% ( 41) 00:10:07.057 11475.380 - 11528.019: 96.2126% ( 36) 00:10:07.057 11528.019 - 11580.659: 96.4718% ( 36) 00:10:07.057 11580.659 - 11633.298: 96.7166% ( 34) 00:10:07.057 11633.298 - 11685.937: 96.9470% ( 32) 00:10:07.057 11685.937 - 11738.577: 97.1342% ( 26) 00:10:07.057 11738.577 - 11791.216: 97.2494% ( 16) 00:10:07.057 11791.216 - 11843.855: 97.3430% ( 13) 00:10:07.057 11843.855 - 11896.495: 97.4150% ( 10) 00:10:07.057 11896.495 - 11949.134: 97.4798% ( 9) 00:10:07.057 11949.134 - 12001.773: 97.5518% ( 10) 00:10:07.057 12001.773 - 12054.413: 97.6238% ( 10) 00:10:07.057 12054.413 - 12107.052: 97.6671% ( 6) 00:10:07.057 12107.052 - 12159.692: 97.7031% ( 5) 00:10:07.057 12159.692 - 12212.331: 97.7391% ( 5) 00:10:07.057 12212.331 - 12264.970: 97.7895% ( 7) 00:10:07.057 12264.970 - 12317.610: 97.8183% ( 4) 00:10:07.057 12317.610 - 12370.249: 97.8327% ( 2) 00:10:07.057 12370.249 - 12422.888: 97.8543% ( 3) 00:10:07.057 12422.888 - 12475.528: 97.8687% ( 2) 00:10:07.057 12475.528 - 12528.167: 97.8831% ( 2) 00:10:07.057 12528.167 - 12580.806: 97.8975% ( 2) 00:10:07.057 12580.806 - 12633.446: 97.9191% ( 3) 00:10:07.057 12633.446 - 12686.085: 97.9479% ( 4) 00:10:07.057 12686.085 - 12738.724: 97.9839% ( 5) 00:10:07.057 12738.724 - 12791.364: 98.0199% ( 5) 00:10:07.057 12791.364 - 12844.003: 98.0415% ( 3) 00:10:07.057 12844.003 - 12896.643: 98.0775% ( 5) 00:10:07.057 12896.643 - 12949.282: 98.1063% ( 4) 00:10:07.057 12949.282 - 13001.921: 98.1423% ( 5) 00:10:07.057 13001.921 - 13054.561: 98.1927% ( 7) 00:10:07.057 13054.561 - 13107.200: 98.2575% ( 9) 00:10:07.057 13107.200 - 13159.839: 98.3007% ( 6) 00:10:07.057 13159.839 - 13212.479: 98.3439% ( 6) 00:10:07.057 13212.479 - 13265.118: 98.3871% ( 6) 00:10:07.057 13265.118 - 13317.757: 98.4303% ( 6) 00:10:07.057 13317.757 - 13370.397: 98.4591% ( 4) 00:10:07.057 13370.397 - 13423.036: 98.4951% ( 5) 00:10:07.057 13423.036 - 13475.676: 98.5239% ( 4) 00:10:07.057 13475.676 - 13580.954: 98.5599% ( 5) 00:10:07.057 13580.954 - 13686.233: 98.6247% ( 9) 00:10:07.057 13686.233 - 13791.512: 98.6751% ( 7) 00:10:07.057 13791.512 - 13896.790: 98.7183% ( 6) 00:10:07.057 13896.790 - 14002.069: 98.7831% ( 9) 00:10:07.057 14002.069 - 14107.348: 98.8335% ( 7) 00:10:07.057 14107.348 - 14212.627: 98.8911% ( 8) 00:10:07.057 14212.627 - 14317.905: 98.9343% ( 6) 00:10:07.057 14317.905 - 14423.184: 98.9559% ( 3) 00:10:07.057 14423.184 - 14528.463: 98.9847% ( 4) 00:10:07.057 14528.463 - 14633.741: 99.0063% ( 3) 00:10:07.057 14633.741 - 14739.020: 99.0279% ( 3) 00:10:07.057 14739.020 - 14844.299: 99.0495% ( 3) 00:10:07.057 14844.299 - 14949.578: 99.0783% ( 4) 00:10:07.057 33478.631 - 33689.189: 99.1143% ( 5) 00:10:07.057 33689.189 - 33899.746: 99.1575% ( 6) 00:10:07.057 33899.746 - 34110.304: 99.2079% ( 7) 00:10:07.057 34110.304 - 34320.861: 99.2584% ( 7) 00:10:07.057 34320.861 - 34531.418: 99.3016% ( 6) 00:10:07.057 34531.418 - 34741.976: 99.3376% ( 5) 00:10:07.057 34741.976 - 34952.533: 99.3880% ( 7) 00:10:07.057 34952.533 - 35163.091: 99.4456% ( 8) 00:10:07.057 35163.091 - 35373.648: 99.4816% ( 5) 00:10:07.057 35373.648 - 35584.206: 99.5320% ( 7) 00:10:07.057 35584.206 - 35794.763: 99.5392% ( 1) 00:10:07.057 41479.814 - 41690.371: 99.5752% ( 5) 00:10:07.057 41690.371 - 41900.929: 99.6184% ( 6) 00:10:07.057 41900.929 - 42111.486: 99.6616% ( 6) 00:10:07.057 42111.486 - 42322.043: 
99.6976% ( 5) 00:10:07.057 42322.043 - 42532.601: 99.7408% ( 6) 00:10:07.057 42532.601 - 42743.158: 99.7840% ( 6) 00:10:07.057 42743.158 - 42953.716: 99.8272% ( 6) 00:10:07.057 42953.716 - 43164.273: 99.8704% ( 6) 00:10:07.057 43164.273 - 43374.831: 99.9208% ( 7) 00:10:07.057 43374.831 - 43585.388: 99.9568% ( 5) 00:10:07.057 43585.388 - 43795.945: 99.9928% ( 5) 00:10:07.057 43795.945 - 44006.503: 100.0000% ( 1) 00:10:07.057 00:10:07.057 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:07.057 ============================================================================== 00:10:07.057 Range in us Cumulative IO count 00:10:07.057 7843.264 - 7895.904: 0.0216% ( 3) 00:10:07.057 7895.904 - 7948.543: 0.1008% ( 11) 00:10:07.057 7948.543 - 8001.182: 0.2592% ( 22) 00:10:07.057 8001.182 - 8053.822: 0.8281% ( 79) 00:10:07.057 8053.822 - 8106.461: 1.8577% ( 143) 00:10:07.057 8106.461 - 8159.100: 3.8162% ( 272) 00:10:07.057 8159.100 - 8211.740: 6.7036% ( 401) 00:10:07.057 8211.740 - 8264.379: 10.3399% ( 505) 00:10:07.057 8264.379 - 8317.018: 14.3505% ( 557) 00:10:07.057 8317.018 - 8369.658: 18.4476% ( 569) 00:10:07.057 8369.658 - 8422.297: 23.1567% ( 654) 00:10:07.057 8422.297 - 8474.937: 28.0242% ( 676) 00:10:07.057 8474.937 - 8527.576: 33.1005% ( 705) 00:10:07.057 8527.576 - 8580.215: 38.1120% ( 696) 00:10:07.057 8580.215 - 8632.855: 43.2676% ( 716) 00:10:07.057 8632.855 - 8685.494: 48.3367% ( 704) 00:10:07.057 8685.494 - 8738.133: 53.4490% ( 710) 00:10:07.057 8738.133 - 8790.773: 58.5757% ( 712) 00:10:07.058 8790.773 - 8843.412: 63.6665% ( 707) 00:10:07.058 8843.412 - 8896.051: 68.4044% ( 658) 00:10:07.058 8896.051 - 8948.691: 72.4150% ( 557) 00:10:07.058 8948.691 - 9001.330: 75.7272% ( 460) 00:10:07.058 9001.330 - 9053.969: 78.2114% ( 345) 00:10:07.058 9053.969 - 9106.609: 80.0907% ( 261) 00:10:07.058 9106.609 - 9159.248: 81.5956% ( 209) 00:10:07.058 9159.248 - 9211.888: 82.7765% ( 164) 00:10:07.058 9211.888 - 9264.527: 83.7270% ( 132) 00:10:07.058 9264.527 - 9317.166: 84.4902% ( 106) 00:10:07.058 9317.166 - 9369.806: 85.0518% ( 78) 00:10:07.058 9369.806 - 9422.445: 85.5199% ( 65) 00:10:07.058 9422.445 - 9475.084: 85.9519% ( 60) 00:10:07.058 9475.084 - 9527.724: 86.3623% ( 57) 00:10:07.058 9527.724 - 9580.363: 86.8088% ( 62) 00:10:07.058 9580.363 - 9633.002: 87.2120% ( 56) 00:10:07.058 9633.002 - 9685.642: 87.5792% ( 51) 00:10:07.058 9685.642 - 9738.281: 87.9464% ( 51) 00:10:07.058 9738.281 - 9790.920: 88.2560% ( 43) 00:10:07.058 9790.920 - 9843.560: 88.5081% ( 35) 00:10:07.058 9843.560 - 9896.199: 88.7385% ( 32) 00:10:07.058 9896.199 - 9948.839: 88.9473% ( 29) 00:10:07.058 9948.839 - 10001.478: 89.1417% ( 27) 00:10:07.058 10001.478 - 10054.117: 89.4081% ( 37) 00:10:07.058 10054.117 - 10106.757: 89.5809% ( 24) 00:10:07.058 10106.757 - 10159.396: 89.7249% ( 20) 00:10:07.058 10159.396 - 10212.035: 89.9266% ( 28) 00:10:07.058 10212.035 - 10264.675: 90.1138% ( 26) 00:10:07.058 10264.675 - 10317.314: 90.2866% ( 24) 00:10:07.058 10317.314 - 10369.953: 90.4810% ( 27) 00:10:07.058 10369.953 - 10422.593: 90.6106% ( 18) 00:10:07.058 10422.593 - 10475.232: 90.7690% ( 22) 00:10:07.058 10475.232 - 10527.871: 90.9418% ( 24) 00:10:07.058 10527.871 - 10580.511: 91.0714% ( 18) 00:10:07.058 10580.511 - 10633.150: 91.2586% ( 26) 00:10:07.058 10633.150 - 10685.790: 91.4387% ( 25) 00:10:07.058 10685.790 - 10738.429: 91.6691% ( 32) 00:10:07.058 10738.429 - 10791.068: 91.9499% ( 39) 00:10:07.058 10791.068 - 10843.708: 92.2667% ( 44) 00:10:07.058 10843.708 - 10896.347: 92.5763% ( 43) 00:10:07.058 
10896.347 - 10948.986: 92.8931% ( 44) 00:10:07.058 10948.986 - 11001.626: 93.2316% ( 47) 00:10:07.058 11001.626 - 11054.265: 93.5628% ( 46) 00:10:07.058 11054.265 - 11106.904: 93.8940% ( 46) 00:10:07.058 11106.904 - 11159.544: 94.2252% ( 46) 00:10:07.058 11159.544 - 11212.183: 94.5276% ( 42) 00:10:07.058 11212.183 - 11264.822: 94.8373% ( 43) 00:10:07.058 11264.822 - 11317.462: 95.1181% ( 39) 00:10:07.058 11317.462 - 11370.101: 95.4061% ( 40) 00:10:07.058 11370.101 - 11422.741: 95.7013% ( 41) 00:10:07.058 11422.741 - 11475.380: 95.9677% ( 37) 00:10:07.058 11475.380 - 11528.019: 96.2630% ( 41) 00:10:07.058 11528.019 - 11580.659: 96.5366% ( 38) 00:10:07.058 11580.659 - 11633.298: 96.8030% ( 37) 00:10:07.058 11633.298 - 11685.937: 97.0550% ( 35) 00:10:07.058 11685.937 - 11738.577: 97.2278% ( 24) 00:10:07.058 11738.577 - 11791.216: 97.3862% ( 22) 00:10:07.058 11791.216 - 11843.855: 97.5086% ( 17) 00:10:07.058 11843.855 - 11896.495: 97.5806% ( 10) 00:10:07.058 11896.495 - 11949.134: 97.6526% ( 10) 00:10:07.058 11949.134 - 12001.773: 97.7031% ( 7) 00:10:07.058 12001.773 - 12054.413: 97.7463% ( 6) 00:10:07.058 12054.413 - 12107.052: 97.7751% ( 4) 00:10:07.058 12107.052 - 12159.692: 97.8039% ( 4) 00:10:07.058 12159.692 - 12212.331: 97.8399% ( 5) 00:10:07.058 12212.331 - 12264.970: 97.8687% ( 4) 00:10:07.058 12264.970 - 12317.610: 97.8975% ( 4) 00:10:07.058 12317.610 - 12370.249: 97.9263% ( 4) 00:10:07.058 12370.249 - 12422.888: 97.9623% ( 5) 00:10:07.058 12422.888 - 12475.528: 97.9839% ( 3) 00:10:07.058 12475.528 - 12528.167: 97.9983% ( 2) 00:10:07.058 12528.167 - 12580.806: 98.0415% ( 6) 00:10:07.058 12580.806 - 12633.446: 98.0703% ( 4) 00:10:07.058 12633.446 - 12686.085: 98.0919% ( 3) 00:10:07.058 12686.085 - 12738.724: 98.1279% ( 5) 00:10:07.058 12738.724 - 12791.364: 98.1495% ( 3) 00:10:07.058 12791.364 - 12844.003: 98.1711% ( 3) 00:10:07.058 12844.003 - 12896.643: 98.1999% ( 4) 00:10:07.058 12896.643 - 12949.282: 98.2215% ( 3) 00:10:07.058 12949.282 - 13001.921: 98.2431% ( 3) 00:10:07.058 13001.921 - 13054.561: 98.2719% ( 4) 00:10:07.058 13054.561 - 13107.200: 98.2935% ( 3) 00:10:07.058 13107.200 - 13159.839: 98.3223% ( 4) 00:10:07.058 13159.839 - 13212.479: 98.3295% ( 1) 00:10:07.058 13212.479 - 13265.118: 98.3439% ( 2) 00:10:07.058 13265.118 - 13317.757: 98.3583% ( 2) 00:10:07.058 13317.757 - 13370.397: 98.3871% ( 4) 00:10:07.058 13370.397 - 13423.036: 98.4159% ( 4) 00:10:07.058 13423.036 - 13475.676: 98.4303% ( 2) 00:10:07.058 13475.676 - 13580.954: 98.4879% ( 8) 00:10:07.058 13580.954 - 13686.233: 98.5455% ( 8) 00:10:07.058 13686.233 - 13791.512: 98.5959% ( 7) 00:10:07.058 13791.512 - 13896.790: 98.6535% ( 8) 00:10:07.058 13896.790 - 14002.069: 98.7039% ( 7) 00:10:07.058 14002.069 - 14107.348: 98.7543% ( 7) 00:10:07.058 14107.348 - 14212.627: 98.8191% ( 9) 00:10:07.058 14212.627 - 14317.905: 98.8839% ( 9) 00:10:07.058 14317.905 - 14423.184: 98.9487% ( 9) 00:10:07.058 14423.184 - 14528.463: 98.9919% ( 6) 00:10:07.058 14528.463 - 14633.741: 99.0351% ( 6) 00:10:07.058 14633.741 - 14739.020: 99.0783% ( 6) 00:10:07.058 31583.614 - 31794.172: 99.0927% ( 2) 00:10:07.058 31794.172 - 32004.729: 99.1287% ( 5) 00:10:07.058 32004.729 - 32215.287: 99.1719% ( 6) 00:10:07.058 32215.287 - 32425.844: 99.2079% ( 5) 00:10:07.058 32425.844 - 32636.402: 99.2440% ( 5) 00:10:07.058 32636.402 - 32846.959: 99.2872% ( 6) 00:10:07.058 32846.959 - 33057.516: 99.3376% ( 7) 00:10:07.058 33057.516 - 33268.074: 99.3808% ( 6) 00:10:07.058 33268.074 - 33478.631: 99.4240% ( 6) 00:10:07.058 33478.631 - 33689.189: 99.4672% ( 
6) 00:10:07.058 33689.189 - 33899.746: 99.5104% ( 6) 00:10:07.058 33899.746 - 34110.304: 99.5392% ( 4) 00:10:07.058 39795.354 - 40005.912: 99.5824% ( 6) 00:10:07.058 40005.912 - 40216.469: 99.6184% ( 5) 00:10:07.058 40216.469 - 40427.027: 99.6616% ( 6) 00:10:07.058 40427.027 - 40637.584: 99.6976% ( 5) 00:10:07.058 40637.584 - 40848.141: 99.7408% ( 6) 00:10:07.058 40848.141 - 41058.699: 99.7840% ( 6) 00:10:07.058 41058.699 - 41269.256: 99.8272% ( 6) 00:10:07.058 41269.256 - 41479.814: 99.8704% ( 6) 00:10:07.058 41479.814 - 41690.371: 99.9136% ( 6) 00:10:07.058 41690.371 - 41900.929: 99.9496% ( 5) 00:10:07.058 41900.929 - 42111.486: 100.0000% ( 7) 00:10:07.058 00:10:07.058 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:07.058 ============================================================================== 00:10:07.058 Range in us Cumulative IO count 00:10:07.058 7843.264 - 7895.904: 0.0144% ( 2) 00:10:07.058 7895.904 - 7948.543: 0.0720% ( 8) 00:10:07.058 7948.543 - 8001.182: 0.2088% ( 19) 00:10:07.058 8001.182 - 8053.822: 0.7776% ( 79) 00:10:07.058 8053.822 - 8106.461: 2.0593% ( 178) 00:10:07.058 8106.461 - 8159.100: 4.0035% ( 270) 00:10:07.058 8159.100 - 8211.740: 6.8116% ( 390) 00:10:07.058 8211.740 - 8264.379: 10.2319% ( 475) 00:10:07.058 8264.379 - 8317.018: 14.1633% ( 546) 00:10:07.058 8317.018 - 8369.658: 18.4980% ( 602) 00:10:07.058 8369.658 - 8422.297: 23.1279% ( 643) 00:10:07.058 8422.297 - 8474.937: 27.7290% ( 639) 00:10:07.058 8474.937 - 8527.576: 32.8485% ( 711) 00:10:07.058 8527.576 - 8580.215: 37.9968% ( 715) 00:10:07.058 8580.215 - 8632.855: 43.0300% ( 699) 00:10:07.058 8632.855 - 8685.494: 48.0991% ( 704) 00:10:07.058 8685.494 - 8738.133: 53.3266% ( 726) 00:10:07.058 8738.133 - 8790.773: 58.5181% ( 721) 00:10:07.058 8790.773 - 8843.412: 63.6305% ( 710) 00:10:07.058 8843.412 - 8896.051: 68.2964% ( 648) 00:10:07.058 8896.051 - 8948.691: 72.3790% ( 567) 00:10:07.058 8948.691 - 9001.330: 75.6696% ( 457) 00:10:07.058 9001.330 - 9053.969: 78.1538% ( 345) 00:10:07.058 9053.969 - 9106.609: 80.1195% ( 273) 00:10:07.058 9106.609 - 9159.248: 81.6244% ( 209) 00:10:07.058 9159.248 - 9211.888: 82.8485% ( 170) 00:10:07.058 9211.888 - 9264.527: 83.7990% ( 132) 00:10:07.058 9264.527 - 9317.166: 84.6702% ( 121) 00:10:07.058 9317.166 - 9369.806: 85.3471% ( 94) 00:10:07.058 9369.806 - 9422.445: 85.8655% ( 72) 00:10:07.058 9422.445 - 9475.084: 86.3263% ( 64) 00:10:07.059 9475.084 - 9527.724: 86.7296% ( 56) 00:10:07.059 9527.724 - 9580.363: 87.0896% ( 50) 00:10:07.059 9580.363 - 9633.002: 87.4640% ( 52) 00:10:07.059 9633.002 - 9685.642: 87.7952% ( 46) 00:10:07.059 9685.642 - 9738.281: 88.1120% ( 44) 00:10:07.059 9738.281 - 9790.920: 88.3425% ( 32) 00:10:07.059 9790.920 - 9843.560: 88.5441% ( 28) 00:10:07.059 9843.560 - 9896.199: 88.7529% ( 29) 00:10:07.059 9896.199 - 9948.839: 88.9905% ( 33) 00:10:07.059 9948.839 - 10001.478: 89.2497% ( 36) 00:10:07.059 10001.478 - 10054.117: 89.4585% ( 29) 00:10:07.059 10054.117 - 10106.757: 89.6313% ( 24) 00:10:07.059 10106.757 - 10159.396: 89.8041% ( 24) 00:10:07.059 10159.396 - 10212.035: 89.9482% ( 20) 00:10:07.059 10212.035 - 10264.675: 90.0778% ( 18) 00:10:07.059 10264.675 - 10317.314: 90.2002% ( 17) 00:10:07.059 10317.314 - 10369.953: 90.3370% ( 19) 00:10:07.059 10369.953 - 10422.593: 90.4882% ( 21) 00:10:07.059 10422.593 - 10475.232: 90.6394% ( 21) 00:10:07.059 10475.232 - 10527.871: 90.7978% ( 22) 00:10:07.059 10527.871 - 10580.511: 90.9562% ( 22) 00:10:07.059 10580.511 - 10633.150: 91.1218% ( 23) 00:10:07.059 10633.150 - 10685.790: 
91.3234% ( 28) 00:10:07.059 10685.790 - 10738.429: 91.5683% ( 34) 00:10:07.059 10738.429 - 10791.068: 91.8419% ( 38) 00:10:07.059 10791.068 - 10843.708: 92.1155% ( 38) 00:10:07.059 10843.708 - 10896.347: 92.3963% ( 39) 00:10:07.059 10896.347 - 10948.986: 92.7275% ( 46) 00:10:07.059 10948.986 - 11001.626: 93.0732% ( 48) 00:10:07.059 11001.626 - 11054.265: 93.3828% ( 43) 00:10:07.059 11054.265 - 11106.904: 93.7068% ( 45) 00:10:07.059 11106.904 - 11159.544: 93.9948% ( 40) 00:10:07.059 11159.544 - 11212.183: 94.3116% ( 44) 00:10:07.059 11212.183 - 11264.822: 94.6069% ( 41) 00:10:07.059 11264.822 - 11317.462: 94.9237% ( 44) 00:10:07.059 11317.462 - 11370.101: 95.2477% ( 45) 00:10:07.059 11370.101 - 11422.741: 95.5501% ( 42) 00:10:07.059 11422.741 - 11475.380: 95.8525% ( 42) 00:10:07.059 11475.380 - 11528.019: 96.1838% ( 46) 00:10:07.059 11528.019 - 11580.659: 96.4574% ( 38) 00:10:07.059 11580.659 - 11633.298: 96.7526% ( 41) 00:10:07.059 11633.298 - 11685.937: 96.9902% ( 33) 00:10:07.059 11685.937 - 11738.577: 97.1702% ( 25) 00:10:07.059 11738.577 - 11791.216: 97.3286% ( 22) 00:10:07.059 11791.216 - 11843.855: 97.4726% ( 20) 00:10:07.059 11843.855 - 11896.495: 97.5734% ( 14) 00:10:07.059 11896.495 - 11949.134: 97.6599% ( 12) 00:10:07.059 11949.134 - 12001.773: 97.7319% ( 10) 00:10:07.059 12001.773 - 12054.413: 97.7679% ( 5) 00:10:07.059 12054.413 - 12107.052: 97.8183% ( 7) 00:10:07.059 12107.052 - 12159.692: 97.8543% ( 5) 00:10:07.059 12159.692 - 12212.331: 97.8975% ( 6) 00:10:07.059 12212.331 - 12264.970: 97.9335% ( 5) 00:10:07.059 12264.970 - 12317.610: 97.9911% ( 8) 00:10:07.059 12317.610 - 12370.249: 98.0343% ( 6) 00:10:07.059 12370.249 - 12422.888: 98.0775% ( 6) 00:10:07.059 12422.888 - 12475.528: 98.1207% ( 6) 00:10:07.059 12475.528 - 12528.167: 98.1639% ( 6) 00:10:07.059 12528.167 - 12580.806: 98.2071% ( 6) 00:10:07.059 12580.806 - 12633.446: 98.2575% ( 7) 00:10:07.059 12633.446 - 12686.085: 98.3007% ( 6) 00:10:07.059 12686.085 - 12738.724: 98.3223% ( 3) 00:10:07.059 12738.724 - 12791.364: 98.3511% ( 4) 00:10:07.059 12791.364 - 12844.003: 98.3583% ( 1) 00:10:07.059 12844.003 - 12896.643: 98.3727% ( 2) 00:10:07.059 12896.643 - 12949.282: 98.3871% ( 2) 00:10:07.059 12949.282 - 13001.921: 98.4015% ( 2) 00:10:07.059 13001.921 - 13054.561: 98.4087% ( 1) 00:10:07.059 13054.561 - 13107.200: 98.4231% ( 2) 00:10:07.059 13107.200 - 13159.839: 98.4375% ( 2) 00:10:07.059 13159.839 - 13212.479: 98.4519% ( 2) 00:10:07.059 13212.479 - 13265.118: 98.4591% ( 1) 00:10:07.059 13265.118 - 13317.757: 98.4735% ( 2) 00:10:07.059 13317.757 - 13370.397: 98.4879% ( 2) 00:10:07.059 13370.397 - 13423.036: 98.5023% ( 2) 00:10:07.059 13423.036 - 13475.676: 98.5167% ( 2) 00:10:07.059 13475.676 - 13580.954: 98.5383% ( 3) 00:10:07.059 13580.954 - 13686.233: 98.5599% ( 3) 00:10:07.059 13686.233 - 13791.512: 98.5959% ( 5) 00:10:07.059 13791.512 - 13896.790: 98.6607% ( 9) 00:10:07.059 13896.790 - 14002.069: 98.7111% ( 7) 00:10:07.059 14002.069 - 14107.348: 98.7543% ( 6) 00:10:07.059 14107.348 - 14212.627: 98.7975% ( 6) 00:10:07.059 14212.627 - 14317.905: 98.8335% ( 5) 00:10:07.059 14317.905 - 14423.184: 98.8767% ( 6) 00:10:07.059 14423.184 - 14528.463: 98.9199% ( 6) 00:10:07.059 14528.463 - 14633.741: 98.9631% ( 6) 00:10:07.059 14633.741 - 14739.020: 99.0063% ( 6) 00:10:07.059 14739.020 - 14844.299: 99.0423% ( 5) 00:10:07.059 14844.299 - 14949.578: 99.0783% ( 5) 00:10:07.059 30109.712 - 30320.270: 99.1071% ( 4) 00:10:07.059 30320.270 - 30530.827: 99.1503% ( 6) 00:10:07.059 30530.827 - 30741.385: 99.1935% ( 6) 00:10:07.059 
30741.385 - 30951.942: 99.2368% ( 6)
00:10:07.059 30951.942 - 31162.500: 99.2800% ( 6)
00:10:07.059 31162.500 - 31373.057: 99.3232% ( 6)
00:10:07.059 31373.057 - 31583.614: 99.3664% ( 6)
00:10:07.059 31583.614 - 31794.172: 99.4096% ( 6)
00:10:07.059 31794.172 - 32004.729: 99.4528% ( 6)
00:10:07.059 32004.729 - 32215.287: 99.4960% ( 6)
00:10:07.059 32215.287 - 32425.844: 99.5392% ( 6)
00:10:07.059 38110.895 - 38321.452: 99.5680% ( 4)
00:10:07.059 38321.452 - 38532.010: 99.6112% ( 6)
00:10:07.059 38532.010 - 38742.567: 99.6544% ( 6)
00:10:07.059 38742.567 - 38953.124: 99.6976% ( 6)
00:10:07.059 38953.124 - 39163.682: 99.7336% ( 5)
00:10:07.059 39163.682 - 39374.239: 99.7768% ( 6)
00:10:07.059 39374.239 - 39584.797: 99.8200% ( 6)
00:10:07.059 39584.797 - 39795.354: 99.8704% ( 7)
00:10:07.059 39795.354 - 40005.912: 99.9136% ( 6)
00:10:07.059 40005.912 - 40216.469: 99.9496% ( 5)
00:10:07.059 40216.469 - 40427.027: 100.0000% ( 7)
00:10:07.059
00:10:07.059 17:59:36 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:10:08.439 Initializing NVMe Controllers
00:10:08.439 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:08.439 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:08.439 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:08.439 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:08.439 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:08.439 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:08.439 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:08.439 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:08.439 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:08.439 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:08.439 Initialization complete. Launching workers.
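A note on the spdk_nvme_perf invocation above, for reading the report that follows: -q 128 sets the queue depth per namespace, -w write selects a sequential 100%-write workload, -o 12288 sets the I/O size to 12288 bytes (12 KiB), -t 1 runs for one second, and -i 0 picks the shared-memory group ID; -L enables software latency tracking, and passing it twice (-LL) additionally prints the detailed per-device latency histograms that make up most of the output below. These flag readings follow spdk_nvme_perf's usage text and are worth double-checking against the SPDK revision built in this job. The Python sketch below is illustrative only (mib_per_s, latency_percentile and IO_SIZE_BYTES are ad-hoc names, not SPDK code) and shows how the summary numbers hang together:

IO_SIZE_BYTES = 12288  # matches -o 12288 above

def mib_per_s(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
    """Throughput in MiB/s implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

def latency_percentile(buckets: list[tuple[float, float]], pct: float) -> float:
    """End of the first histogram bucket whose cumulative share reaches pct.

    buckets holds (bucket_end_us, cumulative_pct) pairs in printed order,
    read off a 'Latency histogram' section ('Range in us  Cumulative IO count').
    The 'Summary latency data' percentiles are consistent with this reading.
    """
    for bucket_end_us, cumulative_pct in buckets:
        if cumulative_pct >= pct:
            return bucket_end_us
    raise ValueError(f"{pct}% is beyond the recorded range")

# Sanity checks against the report below: 14861.94 IOPS at 12288 B is
# ~174.16 MiB/s, and the first device crosses 50% in the histogram bucket
# ending at 8106.461us, matching its '50.00000% : 8106.461us' summary line.
assert abs(mib_per_s(14861.94) - 174.16) < 0.01
assert latency_percentile([(8053.822, 47.4048), (8106.461, 52.4209)], 50.0) == 8106.461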
00:10:08.439 ========================================================
00:10:08.439 Latency(us)
00:10:08.439 Device Information : IOPS MiB/s Average min max
00:10:08.439 PCIE (0000:00:10.0) NSID 1 from core 0: 14861.94 174.16 8631.54 6488.89 53058.74
00:10:08.439 PCIE (0000:00:11.0) NSID 1 from core 0: 14861.94 174.16 8617.64 6595.66 51222.49
00:10:08.439 PCIE (0000:00:13.0) NSID 1 from core 0: 14861.94 174.16 8603.96 6622.48 50552.16
00:10:08.439 PCIE (0000:00:12.0) NSID 1 from core 0: 14861.94 174.16 8590.27 6715.08 48746.65
00:10:08.439 PCIE (0000:00:12.0) NSID 2 from core 0: 14861.94 174.16 8576.29 6497.51 48015.24
00:10:08.439 PCIE (0000:00:12.0) NSID 3 from core 0: 14925.73 174.91 8526.01 6581.26 38416.96
00:10:08.439 ========================================================
00:10:08.439 Total : 89235.45 1045.73 8590.90 6488.89 53058.74
00:10:08.439
00:10:08.439 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:08.439 =================================================================================
00:10:08.439 1.00000% : 7001.035us
00:10:08.439 10.00000% : 7527.428us
00:10:08.439 25.00000% : 7790.625us
00:10:08.439 50.00000% : 8106.461us
00:10:08.439 75.00000% : 8474.937us
00:10:08.439 90.00000% : 8843.412us
00:10:08.439 95.00000% : 9738.281us
00:10:08.439 98.00000% : 16739.316us
00:10:08.439 99.00000% : 19160.726us
00:10:08.439 99.50000% : 43374.831us
00:10:08.439 99.90000% : 52639.357us
00:10:08.439 99.99000% : 53060.472us
00:10:08.439 99.99900% : 53060.472us
00:10:08.439 99.99990% : 53060.472us
00:10:08.439 99.99999% : 53060.472us
00:10:08.439
00:10:08.439 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:08.439 =================================================================================
00:10:08.439 1.00000% : 7053.674us
00:10:08.439 10.00000% : 7580.067us
00:10:08.439 25.00000% : 7790.625us
00:10:08.439 50.00000% : 8106.461us
00:10:08.439 75.00000% : 8422.297us
00:10:08.439 90.00000% : 8790.773us
00:10:08.439 95.00000% : 9685.642us
00:10:08.439 98.00000% : 16844.594us
00:10:08.439 99.00000% : 19266.005us
00:10:08.439 99.50000% : 42111.486us
00:10:08.439 99.90000% : 50954.898us
00:10:08.439 99.99000% : 51376.013us
00:10:08.439 99.99900% : 51376.013us
00:10:08.439 99.99990% : 51376.013us
00:10:08.439 99.99999% : 51376.013us
00:10:08.439
00:10:08.439 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:08.439 =================================================================================
00:10:08.439 1.00000% : 7001.035us
00:10:08.439 10.00000% : 7527.428us
00:10:08.439 25.00000% : 7790.625us
00:10:08.439 50.00000% : 8106.461us
00:10:08.439 75.00000% : 8474.937us
00:10:08.439 90.00000% : 8843.412us
00:10:08.439 95.00000% : 9580.363us
00:10:08.439 98.00000% : 17160.431us
00:10:08.439 99.00000% : 20108.235us
00:10:08.439 99.50000% : 41690.371us
00:10:08.439 99.90000% : 50323.226us
00:10:08.439 99.99000% : 50533.783us
00:10:08.439 99.99900% : 50744.341us
00:10:08.439 99.99990% : 50744.341us
00:10:08.439 99.99999% : 50744.341us
00:10:08.439
00:10:08.439 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:08.439 =================================================================================
00:10:08.439 1.00000% : 7053.674us
00:10:08.439 10.00000% : 7527.428us
00:10:08.439 25.00000% : 7790.625us
00:10:08.439 50.00000% : 8106.461us
00:10:08.439 75.00000% : 8422.297us
00:10:08.439 90.00000% : 8843.412us
00:10:08.439 95.00000% : 9527.724us
00:10:08.439 98.00000% : 17160.431us
00:10:08.439 99.00000% :
20108.235us 00:10:08.439 99.50000% : 40005.912us 00:10:08.439 99.90000% : 48428.209us 00:10:08.439 99.99000% : 48849.324us 00:10:08.439 99.99900% : 48849.324us 00:10:08.439 99.99990% : 48849.324us 00:10:08.439 99.99999% : 48849.324us 00:10:08.439 00:10:08.439 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:08.439 ================================================================================= 00:10:08.439 1.00000% : 7053.674us 00:10:08.439 10.00000% : 7527.428us 00:10:08.440 25.00000% : 7790.625us 00:10:08.440 50.00000% : 8106.461us 00:10:08.440 75.00000% : 8422.297us 00:10:08.440 90.00000% : 8790.773us 00:10:08.440 95.00000% : 9633.002us 00:10:08.440 98.00000% : 16739.316us 00:10:08.440 99.00000% : 19266.005us 00:10:08.440 99.50000% : 38742.567us 00:10:08.440 99.90000% : 47796.537us 00:10:08.440 99.99000% : 48007.094us 00:10:08.440 99.99900% : 48217.651us 00:10:08.440 99.99990% : 48217.651us 00:10:08.440 99.99999% : 48217.651us 00:10:08.440 00:10:08.440 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:08.440 ================================================================================= 00:10:08.440 1.00000% : 7158.953us 00:10:08.440 10.00000% : 7527.428us 00:10:08.440 25.00000% : 7790.625us 00:10:08.440 50.00000% : 8106.461us 00:10:08.440 75.00000% : 8422.297us 00:10:08.440 90.00000% : 8843.412us 00:10:08.440 95.00000% : 10001.478us 00:10:08.440 98.00000% : 16423.480us 00:10:08.440 99.00000% : 19160.726us 00:10:08.440 99.50000% : 29267.483us 00:10:08.440 99.90000% : 38110.895us 00:10:08.440 99.99000% : 38532.010us 00:10:08.440 99.99900% : 38532.010us 00:10:08.440 99.99990% : 38532.010us 00:10:08.440 99.99999% : 38532.010us 00:10:08.440 00:10:08.440 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:08.440 ============================================================================== 00:10:08.440 Range in us Cumulative IO count 00:10:08.440 6474.641 - 6500.961: 0.0067% ( 1) 00:10:08.440 6553.600 - 6579.920: 0.0268% ( 3) 00:10:08.440 6579.920 - 6606.239: 0.0335% ( 1) 00:10:08.440 6606.239 - 6632.559: 0.0536% ( 3) 00:10:08.440 6632.559 - 6658.879: 0.0738% ( 3) 00:10:08.440 6658.879 - 6685.198: 0.0872% ( 2) 00:10:08.440 6685.198 - 6711.518: 0.1341% ( 7) 00:10:08.440 6711.518 - 6737.838: 0.1475% ( 2) 00:10:08.440 6737.838 - 6790.477: 0.2548% ( 16) 00:10:08.440 6790.477 - 6843.116: 0.3957% ( 21) 00:10:08.440 6843.116 - 6895.756: 0.5633% ( 25) 00:10:08.440 6895.756 - 6948.395: 0.7846% ( 33) 00:10:08.440 6948.395 - 7001.035: 1.0059% ( 33) 00:10:08.440 7001.035 - 7053.674: 1.3814% ( 56) 00:10:08.440 7053.674 - 7106.313: 1.7234% ( 51) 00:10:08.440 7106.313 - 7158.953: 2.3069% ( 87) 00:10:08.440 7158.953 - 7211.592: 2.9842% ( 101) 00:10:08.440 7211.592 - 7264.231: 4.0571% ( 160) 00:10:08.440 7264.231 - 7316.871: 5.0161% ( 143) 00:10:08.440 7316.871 - 7369.510: 6.7261% ( 255) 00:10:08.440 7369.510 - 7422.149: 8.1277% ( 209) 00:10:08.440 7422.149 - 7474.789: 9.7103% ( 236) 00:10:08.440 7474.789 - 7527.428: 11.6081% ( 283) 00:10:08.440 7527.428 - 7580.067: 13.9284% ( 346) 00:10:08.440 7580.067 - 7632.707: 16.4096% ( 370) 00:10:08.440 7632.707 - 7685.346: 19.5480% ( 468) 00:10:08.440 7685.346 - 7737.986: 22.8809% ( 497) 00:10:08.440 7737.986 - 7790.625: 26.1803% ( 492) 00:10:08.440 7790.625 - 7843.264: 30.1368% ( 590) 00:10:08.440 7843.264 - 7895.904: 34.5158% ( 653) 00:10:08.440 7895.904 - 7948.543: 38.8747% ( 650) 00:10:08.440 7948.543 - 8001.182: 42.8648% ( 595) 00:10:08.440 8001.182 - 8053.822: 47.4048% ( 677) 00:10:08.440 8053.822 - 
8106.461: 52.4209% ( 748) 00:10:08.440 8106.461 - 8159.100: 56.8938% ( 667) 00:10:08.440 8159.100 - 8211.740: 61.1789% ( 639) 00:10:08.440 8211.740 - 8264.379: 65.0416% ( 576) 00:10:08.440 8264.379 - 8317.018: 68.3812% ( 498) 00:10:08.440 8317.018 - 8369.658: 71.4861% ( 463) 00:10:08.440 8369.658 - 8422.297: 74.6245% ( 468) 00:10:08.440 8422.297 - 8474.937: 77.7763% ( 470) 00:10:08.440 8474.937 - 8527.576: 80.3648% ( 386) 00:10:08.440 8527.576 - 8580.215: 83.1277% ( 412) 00:10:08.440 8580.215 - 8632.855: 85.3943% ( 338) 00:10:08.440 8632.855 - 8685.494: 87.1312% ( 259) 00:10:08.440 8685.494 - 8738.133: 88.5193% ( 207) 00:10:08.440 8738.133 - 8790.773: 89.5722% ( 157) 00:10:08.440 8790.773 - 8843.412: 90.2897% ( 107) 00:10:08.440 8843.412 - 8896.051: 90.8932% ( 90) 00:10:08.440 8896.051 - 8948.691: 91.3224% ( 64) 00:10:08.440 8948.691 - 9001.330: 91.7315% ( 61) 00:10:08.440 9001.330 - 9053.969: 92.1942% ( 69) 00:10:08.440 9053.969 - 9106.609: 92.5496% ( 53) 00:10:08.440 9106.609 - 9159.248: 92.7843% ( 35) 00:10:08.440 9159.248 - 9211.888: 93.0392% ( 38) 00:10:08.440 9211.888 - 9264.527: 93.3074% ( 40) 00:10:08.440 9264.527 - 9317.166: 93.7098% ( 60) 00:10:08.440 9317.166 - 9369.806: 93.9981% ( 43) 00:10:08.440 9369.806 - 9422.445: 94.1658% ( 25) 00:10:08.440 9422.445 - 9475.084: 94.4005% ( 35) 00:10:08.440 9475.084 - 9527.724: 94.4810% ( 12) 00:10:08.440 9527.724 - 9580.363: 94.5614% ( 12) 00:10:08.440 9580.363 - 9633.002: 94.6888% ( 19) 00:10:08.440 9633.002 - 9685.642: 94.9370% ( 37) 00:10:08.440 9685.642 - 9738.281: 95.0979% ( 24) 00:10:08.440 9738.281 - 9790.920: 95.1717% ( 11) 00:10:08.440 9790.920 - 9843.560: 95.2253% ( 8) 00:10:08.440 9843.560 - 9896.199: 95.3393% ( 17) 00:10:08.440 9896.199 - 9948.839: 95.4533% ( 17) 00:10:08.440 9948.839 - 10001.478: 95.7283% ( 41) 00:10:08.440 10001.478 - 10054.117: 95.8490% ( 18) 00:10:08.440 10054.117 - 10106.757: 95.9026% ( 8) 00:10:08.440 10106.757 - 10159.396: 95.9496% ( 7) 00:10:08.440 10159.396 - 10212.035: 96.0032% ( 8) 00:10:08.440 10212.035 - 10264.675: 96.0569% ( 8) 00:10:08.440 10264.675 - 10317.314: 96.1105% ( 8) 00:10:08.440 10317.314 - 10369.953: 96.1508% ( 6) 00:10:08.440 10369.953 - 10422.593: 96.1843% ( 5) 00:10:08.440 10422.593 - 10475.232: 96.2111% ( 4) 00:10:08.440 10475.232 - 10527.871: 96.2312% ( 3) 00:10:08.440 10527.871 - 10580.511: 96.2648% ( 5) 00:10:08.440 10580.511 - 10633.150: 96.3251% ( 9) 00:10:08.440 10633.150 - 10685.790: 96.3788% ( 8) 00:10:08.440 10685.790 - 10738.429: 96.4391% ( 9) 00:10:08.440 10738.429 - 10791.068: 96.4793% ( 6) 00:10:08.440 10791.068 - 10843.708: 96.5196% ( 6) 00:10:08.440 10843.708 - 10896.347: 96.5665% ( 7) 00:10:08.440 10896.347 - 10948.986: 96.5732% ( 1) 00:10:08.440 11001.626 - 11054.265: 96.5866% ( 2) 00:10:08.440 11054.265 - 11106.904: 96.6001% ( 2) 00:10:08.440 11106.904 - 11159.544: 96.6671% ( 10) 00:10:08.440 11159.544 - 11212.183: 96.7543% ( 13) 00:10:08.440 11212.183 - 11264.822: 96.8012% ( 7) 00:10:08.440 11264.822 - 11317.462: 96.8146% ( 2) 00:10:08.440 11317.462 - 11370.101: 96.8281% ( 2) 00:10:08.440 11370.101 - 11422.741: 96.8415% ( 2) 00:10:08.440 11422.741 - 11475.380: 96.8616% ( 3) 00:10:08.440 11475.380 - 11528.019: 96.8817% ( 3) 00:10:08.440 11528.019 - 11580.659: 96.8951% ( 2) 00:10:08.440 11580.659 - 11633.298: 96.9152% ( 3) 00:10:08.440 11633.298 - 11685.937: 96.9286% ( 2) 00:10:08.440 11738.577 - 11791.216: 96.9354% ( 1) 00:10:08.440 11843.855 - 11896.495: 96.9555% ( 3) 00:10:08.440 12001.773 - 12054.413: 96.9622% ( 1) 00:10:08.440 12054.413 - 12107.052: 97.0024% ( 
6) 00:10:08.440 12422.888 - 12475.528: 97.0494% ( 7) 00:10:08.440 12475.528 - 12528.167: 97.1231% ( 11) 00:10:08.440 12528.167 - 12580.806: 97.1567% ( 5) 00:10:08.440 12580.806 - 12633.446: 97.1768% ( 3) 00:10:08.440 12633.446 - 12686.085: 97.1902% ( 2) 00:10:08.440 12686.085 - 12738.724: 97.2036% ( 2) 00:10:08.440 12738.724 - 12791.364: 97.2304% ( 4) 00:10:08.440 12844.003 - 12896.643: 97.2371% ( 1) 00:10:08.440 13054.561 - 13107.200: 97.2572% ( 3) 00:10:08.440 13107.200 - 13159.839: 97.2639% ( 1) 00:10:08.440 13159.839 - 13212.479: 97.2908% ( 4) 00:10:08.440 13212.479 - 13265.118: 97.3042% ( 2) 00:10:08.440 13265.118 - 13317.757: 97.3377% ( 5) 00:10:08.440 13475.676 - 13580.954: 97.3444% ( 1) 00:10:08.440 13580.954 - 13686.233: 97.3712% ( 4) 00:10:08.440 13686.233 - 13791.512: 97.3847% ( 2) 00:10:08.440 13791.512 - 13896.790: 97.4115% ( 4) 00:10:08.440 13896.790 - 14002.069: 97.4249% ( 2) 00:10:08.440 15160.135 - 15265.414: 97.5054% ( 12) 00:10:08.440 15265.414 - 15370.692: 97.5456% ( 6) 00:10:08.440 15370.692 - 15475.971: 97.5858% ( 6) 00:10:08.440 15475.971 - 15581.250: 97.5925% ( 1) 00:10:08.440 15581.250 - 15686.529: 97.6127% ( 3) 00:10:08.440 15686.529 - 15791.807: 97.6462% ( 5) 00:10:08.440 15791.807 - 15897.086: 97.6663% ( 3) 00:10:08.440 15897.086 - 16002.365: 97.6797% ( 2) 00:10:08.440 16002.365 - 16107.643: 97.6998% ( 3) 00:10:08.440 16107.643 - 16212.922: 97.7267% ( 4) 00:10:08.440 16212.922 - 16318.201: 97.7535% ( 4) 00:10:08.440 16318.201 - 16423.480: 97.7803% ( 4) 00:10:08.440 16423.480 - 16528.758: 97.8205% ( 6) 00:10:08.440 16528.758 - 16634.037: 97.9681% ( 22) 00:10:08.440 16634.037 - 16739.316: 98.0553% ( 13) 00:10:08.440 16739.316 - 16844.594: 98.0754% ( 3) 00:10:08.440 16844.594 - 16949.873: 98.1089% ( 5) 00:10:08.440 16949.873 - 17055.152: 98.1223% ( 2) 00:10:08.440 17055.152 - 17160.431: 98.1693% ( 7) 00:10:08.440 17160.431 - 17265.709: 98.2028% ( 5) 00:10:08.440 17265.709 - 17370.988: 98.2363% ( 5) 00:10:08.440 17370.988 - 17476.267: 98.2698% ( 5) 00:10:08.440 17476.267 - 17581.545: 98.2833% ( 2) 00:10:08.440 17792.103 - 17897.382: 98.3369% ( 8) 00:10:08.440 17897.382 - 18002.660: 98.3771% ( 6) 00:10:08.440 18002.660 - 18107.939: 98.3973% ( 3) 00:10:08.440 18107.939 - 18213.218: 98.4174% ( 3) 00:10:08.440 18213.218 - 18318.496: 98.4509% ( 5) 00:10:08.440 18318.496 - 18423.775: 98.4710% ( 3) 00:10:08.440 18423.775 - 18529.054: 98.5448% ( 11) 00:10:08.440 18529.054 - 18634.333: 98.6588% ( 17) 00:10:08.440 18634.333 - 18739.611: 98.7728% ( 17) 00:10:08.440 18739.611 - 18844.890: 98.8533% ( 12) 00:10:08.440 18844.890 - 18950.169: 98.9136% ( 9) 00:10:08.440 18950.169 - 19055.447: 98.9472% ( 5) 00:10:08.440 19055.447 - 19160.726: 99.0008% ( 8) 00:10:08.441 19160.726 - 19266.005: 99.0545% ( 8) 00:10:08.441 19266.005 - 19371.284: 99.0813% ( 4) 00:10:08.441 19371.284 - 19476.562: 99.1282% ( 7) 00:10:08.441 19476.562 - 19581.841: 99.1416% ( 2) 00:10:08.441 41479.814 - 41690.371: 99.1483% ( 1) 00:10:08.441 41690.371 - 41900.929: 99.1953% ( 7) 00:10:08.441 41900.929 - 42111.486: 99.2422% ( 7) 00:10:08.441 42111.486 - 42322.043: 99.2959% ( 8) 00:10:08.441 42322.043 - 42532.601: 99.3428% ( 7) 00:10:08.441 42532.601 - 42743.158: 99.3965% ( 8) 00:10:08.441 42743.158 - 42953.716: 99.4501% ( 8) 00:10:08.441 42953.716 - 43164.273: 99.4836% ( 5) 00:10:08.441 43164.273 - 43374.831: 99.5440% ( 9) 00:10:08.441 43374.831 - 43585.388: 99.5708% ( 4) 00:10:08.441 50954.898 - 51165.455: 99.5775% ( 1) 00:10:08.441 51165.455 - 51376.013: 99.6312% ( 8) 00:10:08.441 51376.013 - 51586.570: 
99.6714% ( 6) 00:10:08.441 51586.570 - 51797.128: 99.7251% ( 8) 00:10:08.441 51797.128 - 52007.685: 99.7653% ( 6) 00:10:08.441 52007.685 - 52218.243: 99.8122% ( 7) 00:10:08.441 52218.243 - 52428.800: 99.8592% ( 7) 00:10:08.441 52428.800 - 52639.357: 99.9128% ( 8) 00:10:08.441 52639.357 - 52849.915: 99.9598% ( 7) 00:10:08.441 52849.915 - 53060.472: 100.0000% ( 6) 00:10:08.441 00:10:08.441 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:08.441 ============================================================================== 00:10:08.441 Range in us Cumulative IO count 00:10:08.441 6579.920 - 6606.239: 0.0067% ( 1) 00:10:08.441 6658.879 - 6685.198: 0.0134% ( 1) 00:10:08.441 6685.198 - 6711.518: 0.0335% ( 3) 00:10:08.441 6711.518 - 6737.838: 0.0604% ( 4) 00:10:08.441 6737.838 - 6790.477: 0.0939% ( 5) 00:10:08.441 6790.477 - 6843.116: 0.1609% ( 10) 00:10:08.441 6843.116 - 6895.756: 0.3152% ( 23) 00:10:08.441 6895.756 - 6948.395: 0.5700% ( 38) 00:10:08.441 6948.395 - 7001.035: 0.7444% ( 26) 00:10:08.441 7001.035 - 7053.674: 1.1870% ( 66) 00:10:08.441 7053.674 - 7106.313: 1.5491% ( 54) 00:10:08.441 7106.313 - 7158.953: 1.6899% ( 21) 00:10:08.441 7158.953 - 7211.592: 1.9514% ( 39) 00:10:08.441 7211.592 - 7264.231: 2.5013% ( 82) 00:10:08.441 7264.231 - 7316.871: 3.6615% ( 173) 00:10:08.441 7316.871 - 7369.510: 5.2039% ( 230) 00:10:08.441 7369.510 - 7422.149: 6.5451% ( 200) 00:10:08.441 7422.149 - 7474.789: 7.9936% ( 216) 00:10:08.441 7474.789 - 7527.428: 9.7304% ( 259) 00:10:08.441 7527.428 - 7580.067: 12.1580% ( 362) 00:10:08.441 7580.067 - 7632.707: 14.9410% ( 415) 00:10:08.441 7632.707 - 7685.346: 18.2873% ( 499) 00:10:08.441 7685.346 - 7737.986: 22.1567% ( 577) 00:10:08.441 7737.986 - 7790.625: 25.4493% ( 491) 00:10:08.441 7790.625 - 7843.264: 29.0370% ( 535) 00:10:08.441 7843.264 - 7895.904: 33.1746% ( 617) 00:10:08.441 7895.904 - 7948.543: 38.0834% ( 732) 00:10:08.441 7948.543 - 8001.182: 42.5429% ( 665) 00:10:08.441 8001.182 - 8053.822: 47.6127% ( 756) 00:10:08.441 8053.822 - 8106.461: 52.0789% ( 666) 00:10:08.441 8106.461 - 8159.100: 56.3104% ( 631) 00:10:08.441 8159.100 - 8211.740: 61.3130% ( 746) 00:10:08.441 8211.740 - 8264.379: 65.6921% ( 653) 00:10:08.441 8264.379 - 8317.018: 70.0174% ( 645) 00:10:08.441 8317.018 - 8369.658: 73.6789% ( 546) 00:10:08.441 8369.658 - 8422.297: 76.6363% ( 441) 00:10:08.441 8422.297 - 8474.937: 80.1100% ( 518) 00:10:08.441 8474.937 - 8527.576: 82.7521% ( 394) 00:10:08.441 8527.576 - 8580.215: 84.8981% ( 320) 00:10:08.441 8580.215 - 8632.855: 86.6215% ( 257) 00:10:08.441 8632.855 - 8685.494: 88.3584% ( 259) 00:10:08.441 8685.494 - 8738.133: 89.9745% ( 241) 00:10:08.441 8738.133 - 8790.773: 91.0139% ( 155) 00:10:08.441 8790.773 - 8843.412: 91.6778% ( 99) 00:10:08.441 8843.412 - 8896.051: 92.1607% ( 72) 00:10:08.441 8896.051 - 8948.691: 92.4960% ( 50) 00:10:08.441 8948.691 - 9001.330: 92.7843% ( 43) 00:10:08.441 9001.330 - 9053.969: 93.0459% ( 39) 00:10:08.441 9053.969 - 9106.609: 93.4549% ( 61) 00:10:08.441 9106.609 - 9159.248: 93.7165% ( 39) 00:10:08.441 9159.248 - 9211.888: 93.9713% ( 38) 00:10:08.441 9211.888 - 9264.527: 94.2127% ( 36) 00:10:08.441 9264.527 - 9317.166: 94.3401% ( 19) 00:10:08.441 9317.166 - 9369.806: 94.4675% ( 19) 00:10:08.441 9369.806 - 9422.445: 94.5883% ( 18) 00:10:08.441 9422.445 - 9475.084: 94.6754% ( 13) 00:10:08.441 9475.084 - 9527.724: 94.8699% ( 29) 00:10:08.441 9527.724 - 9580.363: 94.9236% ( 8) 00:10:08.441 9580.363 - 9633.002: 94.9705% ( 7) 00:10:08.441 9633.002 - 9685.642: 95.0107% ( 6) 00:10:08.441 9685.642 
- 9738.281: 95.0510% ( 6) 00:10:08.441 9738.281 - 9790.920: 95.0979% ( 7) 00:10:08.441 9790.920 - 9843.560: 95.1650% ( 10) 00:10:08.441 9843.560 - 9896.199: 95.2656% ( 15) 00:10:08.441 9896.199 - 9948.839: 95.3997% ( 20) 00:10:08.441 9948.839 - 10001.478: 95.4734% ( 11) 00:10:08.441 10001.478 - 10054.117: 95.6880% ( 32) 00:10:08.441 10054.117 - 10106.757: 95.7417% ( 8) 00:10:08.441 10106.757 - 10159.396: 95.8020% ( 9) 00:10:08.441 10159.396 - 10212.035: 95.8222% ( 3) 00:10:08.441 10212.035 - 10264.675: 95.8289% ( 1) 00:10:08.441 10527.871 - 10580.511: 95.8356% ( 1) 00:10:08.441 10633.150 - 10685.790: 95.8557% ( 3) 00:10:08.441 10685.790 - 10738.429: 95.8691% ( 2) 00:10:08.441 10738.429 - 10791.068: 95.9026% ( 5) 00:10:08.441 10791.068 - 10843.708: 95.9764% ( 11) 00:10:08.441 10843.708 - 10896.347: 96.0300% ( 8) 00:10:08.441 10896.347 - 10948.986: 96.0770% ( 7) 00:10:08.441 10948.986 - 11001.626: 96.1440% ( 10) 00:10:08.441 11001.626 - 11054.265: 96.2044% ( 9) 00:10:08.441 11054.265 - 11106.904: 96.3452% ( 21) 00:10:08.441 11106.904 - 11159.544: 96.4726% ( 19) 00:10:08.441 11159.544 - 11212.183: 96.6336% ( 24) 00:10:08.441 11212.183 - 11264.822: 96.6604% ( 4) 00:10:08.441 11264.822 - 11317.462: 96.7006% ( 6) 00:10:08.441 11317.462 - 11370.101: 96.7208% ( 3) 00:10:08.441 11370.101 - 11422.741: 96.7409% ( 3) 00:10:08.441 11422.741 - 11475.380: 96.7543% ( 2) 00:10:08.441 11475.380 - 11528.019: 96.7744% ( 3) 00:10:08.441 11528.019 - 11580.659: 96.7945% ( 3) 00:10:08.441 11580.659 - 11633.298: 96.8079% ( 2) 00:10:08.441 11633.298 - 11685.937: 96.8214% ( 2) 00:10:08.441 11685.937 - 11738.577: 96.8415% ( 3) 00:10:08.441 11738.577 - 11791.216: 96.8817% ( 6) 00:10:08.441 11791.216 - 11843.855: 96.9286% ( 7) 00:10:08.441 11843.855 - 11896.495: 96.9488% ( 3) 00:10:08.441 11896.495 - 11949.134: 96.9689% ( 3) 00:10:08.441 11949.134 - 12001.773: 96.9823% ( 2) 00:10:08.441 12001.773 - 12054.413: 96.9957% ( 2) 00:10:08.441 12264.970 - 12317.610: 97.0158% ( 3) 00:10:08.441 12317.610 - 12370.249: 97.0427% ( 4) 00:10:08.441 12370.249 - 12422.888: 97.0695% ( 4) 00:10:08.441 12422.888 - 12475.528: 97.0829% ( 2) 00:10:08.441 12475.528 - 12528.167: 97.1164% ( 5) 00:10:08.441 12528.167 - 12580.806: 97.1365% ( 3) 00:10:08.441 12580.806 - 12633.446: 97.1634% ( 4) 00:10:08.441 12633.446 - 12686.085: 97.1768% ( 2) 00:10:08.441 12686.085 - 12738.724: 97.2505% ( 11) 00:10:08.441 12738.724 - 12791.364: 97.2841% ( 5) 00:10:08.441 12791.364 - 12844.003: 97.2908% ( 1) 00:10:08.441 12844.003 - 12896.643: 97.3042% ( 2) 00:10:08.441 12896.643 - 12949.282: 97.3109% ( 1) 00:10:08.441 12949.282 - 13001.921: 97.3243% ( 2) 00:10:08.441 13001.921 - 13054.561: 97.3310% ( 1) 00:10:08.441 13054.561 - 13107.200: 97.3377% ( 1) 00:10:08.441 13107.200 - 13159.839: 97.3511% ( 2) 00:10:08.441 13159.839 - 13212.479: 97.3645% ( 2) 00:10:08.441 13212.479 - 13265.118: 97.3712% ( 1) 00:10:08.441 13265.118 - 13317.757: 97.3847% ( 2) 00:10:08.441 13317.757 - 13370.397: 97.3981% ( 2) 00:10:08.441 13370.397 - 13423.036: 97.4048% ( 1) 00:10:08.441 13423.036 - 13475.676: 97.4182% ( 2) 00:10:08.441 13475.676 - 13580.954: 97.4249% ( 1) 00:10:08.441 14949.578 - 15054.856: 97.4383% ( 2) 00:10:08.441 15054.856 - 15160.135: 97.4852% ( 7) 00:10:08.441 15160.135 - 15265.414: 97.5322% ( 7) 00:10:08.441 15265.414 - 15370.692: 97.5992% ( 10) 00:10:08.441 15370.692 - 15475.971: 97.7200% ( 18) 00:10:08.441 15475.971 - 15581.250: 97.7736% ( 8) 00:10:08.441 15581.250 - 15686.529: 97.8004% ( 4) 00:10:08.441 15686.529 - 15791.807: 97.8273% ( 4) 00:10:08.441 15791.807 - 
15897.086: 97.8541% ( 4) 00:10:08.441 16528.758 - 16634.037: 97.9010% ( 7) 00:10:08.441 16634.037 - 16739.316: 97.9815% ( 12) 00:10:08.441 16739.316 - 16844.594: 98.0888% ( 16) 00:10:08.441 16844.594 - 16949.873: 98.1760% ( 13) 00:10:08.441 16949.873 - 17055.152: 98.2564% ( 12) 00:10:08.441 17055.152 - 17160.431: 98.2833% ( 4) 00:10:08.441 17792.103 - 17897.382: 98.3101% ( 4) 00:10:08.441 17897.382 - 18002.660: 98.3570% ( 7) 00:10:08.441 18002.660 - 18107.939: 98.3973% ( 6) 00:10:08.441 18107.939 - 18213.218: 98.4375% ( 6) 00:10:08.441 18213.218 - 18318.496: 98.4911% ( 8) 00:10:08.441 18318.496 - 18423.775: 98.5381% ( 7) 00:10:08.441 18423.775 - 18529.054: 98.5783% ( 6) 00:10:08.441 18529.054 - 18634.333: 98.6186% ( 6) 00:10:08.441 18634.333 - 18739.611: 98.6454% ( 4) 00:10:08.441 18739.611 - 18844.890: 98.6722% ( 4) 00:10:08.441 18844.890 - 18950.169: 98.7594% ( 13) 00:10:08.441 18950.169 - 19055.447: 98.8264% ( 10) 00:10:08.441 19055.447 - 19160.726: 98.9203% ( 14) 00:10:08.441 19160.726 - 19266.005: 99.0209% ( 15) 00:10:08.441 19266.005 - 19371.284: 99.0947% ( 11) 00:10:08.441 19371.284 - 19476.562: 99.1416% ( 7) 00:10:08.441 40427.027 - 40637.584: 99.1886% ( 7) 00:10:08.441 40637.584 - 40848.141: 99.2422% ( 8) 00:10:08.441 40848.141 - 41058.699: 99.2959% ( 8) 00:10:08.441 41058.699 - 41269.256: 99.3495% ( 8) 00:10:08.442 41269.256 - 41479.814: 99.3965% ( 7) 00:10:08.442 41479.814 - 41690.371: 99.4501% ( 8) 00:10:08.442 41690.371 - 41900.929: 99.4970% ( 7) 00:10:08.442 41900.929 - 42111.486: 99.5440% ( 7) 00:10:08.442 42111.486 - 42322.043: 99.5708% ( 4) 00:10:08.442 49480.996 - 49691.553: 99.6111% ( 6) 00:10:08.442 49691.553 - 49902.111: 99.6714% ( 9) 00:10:08.442 49902.111 - 50112.668: 99.7116% ( 6) 00:10:08.442 50112.668 - 50323.226: 99.7653% ( 8) 00:10:08.442 50323.226 - 50533.783: 99.8256% ( 9) 00:10:08.442 50533.783 - 50744.341: 99.8726% ( 7) 00:10:08.442 50744.341 - 50954.898: 99.9329% ( 9) 00:10:08.442 50954.898 - 51165.455: 99.9799% ( 7) 00:10:08.442 51165.455 - 51376.013: 100.0000% ( 3) 00:10:08.442 00:10:08.442 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:08.442 ============================================================================== 00:10:08.442 Range in us Cumulative IO count 00:10:08.442 6606.239 - 6632.559: 0.0067% ( 1) 00:10:08.442 6632.559 - 6658.879: 0.0134% ( 1) 00:10:08.442 6658.879 - 6685.198: 0.0201% ( 1) 00:10:08.442 6737.838 - 6790.477: 0.0738% ( 8) 00:10:08.442 6790.477 - 6843.116: 0.2012% ( 19) 00:10:08.442 6843.116 - 6895.756: 0.4024% ( 30) 00:10:08.442 6895.756 - 6948.395: 0.8584% ( 68) 00:10:08.442 6948.395 - 7001.035: 1.0730% ( 32) 00:10:08.442 7001.035 - 7053.674: 1.3948% ( 48) 00:10:08.442 7053.674 - 7106.313: 1.6430% ( 37) 00:10:08.442 7106.313 - 7158.953: 1.9783% ( 50) 00:10:08.442 7158.953 - 7211.592: 2.4410% ( 69) 00:10:08.442 7211.592 - 7264.231: 3.2859% ( 126) 00:10:08.442 7264.231 - 7316.871: 4.3857% ( 164) 00:10:08.442 7316.871 - 7369.510: 6.0220% ( 244) 00:10:08.442 7369.510 - 7422.149: 7.8863% ( 278) 00:10:08.442 7422.149 - 7474.789: 9.8981% ( 300) 00:10:08.442 7474.789 - 7527.428: 12.1982% ( 343) 00:10:08.442 7527.428 - 7580.067: 14.5587% ( 352) 00:10:08.442 7580.067 - 7632.707: 17.4356% ( 429) 00:10:08.442 7632.707 - 7685.346: 20.1381% ( 403) 00:10:08.442 7685.346 - 7737.986: 23.5247% ( 505) 00:10:08.442 7737.986 - 7790.625: 26.7234% ( 477) 00:10:08.442 7790.625 - 7843.264: 30.6867% ( 591) 00:10:08.442 7843.264 - 7895.904: 34.3348% ( 544) 00:10:08.442 7895.904 - 7948.543: 38.2377% ( 582) 00:10:08.442 7948.543 - 
8001.182: 43.3342% ( 760) 00:10:08.442 8001.182 - 8053.822: 47.6998% ( 651) 00:10:08.442 8053.822 - 8106.461: 51.8240% ( 615) 00:10:08.442 8106.461 - 8159.100: 55.6398% ( 569) 00:10:08.442 8159.100 - 8211.740: 59.6969% ( 605) 00:10:08.442 8211.740 - 8264.379: 63.9753% ( 638) 00:10:08.442 8264.379 - 8317.018: 67.7843% ( 568) 00:10:08.442 8317.018 - 8369.658: 71.5732% ( 565) 00:10:08.442 8369.658 - 8422.297: 74.9598% ( 505) 00:10:08.442 8422.297 - 8474.937: 78.1854% ( 481) 00:10:08.442 8474.937 - 8527.576: 80.6465% ( 367) 00:10:08.442 8527.576 - 8580.215: 82.7589% ( 315) 00:10:08.442 8580.215 - 8632.855: 84.8914% ( 318) 00:10:08.442 8632.855 - 8685.494: 86.8093% ( 286) 00:10:08.442 8685.494 - 8738.133: 88.1438% ( 199) 00:10:08.442 8738.133 - 8790.773: 89.4850% ( 200) 00:10:08.442 8790.773 - 8843.412: 90.9335% ( 216) 00:10:08.442 8843.412 - 8896.051: 91.6175% ( 102) 00:10:08.442 8896.051 - 8948.691: 92.2613% ( 96) 00:10:08.442 8948.691 - 9001.330: 92.7508% ( 73) 00:10:08.442 9001.330 - 9053.969: 93.0526% ( 45) 00:10:08.442 9053.969 - 9106.609: 93.3208% ( 40) 00:10:08.442 9106.609 - 9159.248: 93.6226% ( 45) 00:10:08.442 9159.248 - 9211.888: 93.8908% ( 40) 00:10:08.442 9211.888 - 9264.527: 94.1121% ( 33) 00:10:08.442 9264.527 - 9317.166: 94.2530% ( 21) 00:10:08.442 9317.166 - 9369.806: 94.3737% ( 18) 00:10:08.442 9369.806 - 9422.445: 94.4877% ( 17) 00:10:08.442 9422.445 - 9475.084: 94.5681% ( 12) 00:10:08.442 9475.084 - 9527.724: 94.7425% ( 26) 00:10:08.442 9527.724 - 9580.363: 95.0845% ( 51) 00:10:08.442 9580.363 - 9633.002: 95.1985% ( 17) 00:10:08.442 9633.002 - 9685.642: 95.2857% ( 13) 00:10:08.442 9685.642 - 9738.281: 95.3661% ( 12) 00:10:08.442 9738.281 - 9790.920: 95.4667% ( 15) 00:10:08.442 9790.920 - 9843.560: 95.5807% ( 17) 00:10:08.442 9843.560 - 9896.199: 95.6277% ( 7) 00:10:08.442 9896.199 - 9948.839: 95.6612% ( 5) 00:10:08.442 9948.839 - 10001.478: 95.7014% ( 6) 00:10:08.442 10001.478 - 10054.117: 95.7082% ( 1) 00:10:08.442 10264.675 - 10317.314: 95.7216% ( 2) 00:10:08.442 10317.314 - 10369.953: 95.7484% ( 4) 00:10:08.442 10369.953 - 10422.593: 95.7752% ( 4) 00:10:08.442 10422.593 - 10475.232: 95.8020% ( 4) 00:10:08.442 10475.232 - 10527.871: 95.8222% ( 3) 00:10:08.442 10527.871 - 10580.511: 95.8758% ( 8) 00:10:08.442 10580.511 - 10633.150: 95.9697% ( 14) 00:10:08.442 10633.150 - 10685.790: 95.9898% ( 3) 00:10:08.442 10685.790 - 10738.429: 96.0166% ( 4) 00:10:08.442 10738.429 - 10791.068: 96.0569% ( 6) 00:10:08.442 10791.068 - 10843.708: 96.1172% ( 9) 00:10:08.442 10843.708 - 10896.347: 96.1709% ( 8) 00:10:08.442 10896.347 - 10948.986: 96.2245% ( 8) 00:10:08.442 10948.986 - 11001.626: 96.3050% ( 12) 00:10:08.442 11001.626 - 11054.265: 96.4056% ( 15) 00:10:08.442 11054.265 - 11106.904: 96.5598% ( 23) 00:10:08.442 11106.904 - 11159.544: 96.6805% ( 18) 00:10:08.442 11159.544 - 11212.183: 96.7610% ( 12) 00:10:08.442 11212.183 - 11264.822: 96.8415% ( 12) 00:10:08.442 11264.822 - 11317.462: 96.8750% ( 5) 00:10:08.442 11317.462 - 11370.101: 96.9018% ( 4) 00:10:08.442 11370.101 - 11422.741: 96.9354% ( 5) 00:10:08.442 11422.741 - 11475.380: 96.9555% ( 3) 00:10:08.442 11475.380 - 11528.019: 96.9756% ( 3) 00:10:08.442 11528.019 - 11580.659: 96.9890% ( 2) 00:10:08.442 11580.659 - 11633.298: 96.9957% ( 1) 00:10:08.442 12001.773 - 12054.413: 97.0158% ( 3) 00:10:08.442 12054.413 - 12107.052: 97.0628% ( 7) 00:10:08.442 12107.052 - 12159.692: 97.1164% ( 8) 00:10:08.442 12159.692 - 12212.331: 97.1701% ( 8) 00:10:08.442 12212.331 - 12264.970: 97.2438% ( 11) 00:10:08.442 12264.970 - 12317.610: 
97.2572% ( 2) 00:10:08.442 12317.610 - 12370.249: 97.2707% ( 2) 00:10:08.442 12370.249 - 12422.888: 97.2841% ( 2) 00:10:08.442 12422.888 - 12475.528: 97.2975% ( 2) 00:10:08.442 12475.528 - 12528.167: 97.3176% ( 3) 00:10:08.442 12528.167 - 12580.806: 97.3243% ( 1) 00:10:08.442 12580.806 - 12633.446: 97.3377% ( 2) 00:10:08.442 12633.446 - 12686.085: 97.3511% ( 2) 00:10:08.442 12686.085 - 12738.724: 97.3645% ( 2) 00:10:08.442 12738.724 - 12791.364: 97.3780% ( 2) 00:10:08.442 12791.364 - 12844.003: 97.3847% ( 1) 00:10:08.442 12844.003 - 12896.643: 97.3981% ( 2) 00:10:08.442 12896.643 - 12949.282: 97.4115% ( 2) 00:10:08.442 12949.282 - 13001.921: 97.4249% ( 2) 00:10:08.442 13370.397 - 13423.036: 97.4316% ( 1) 00:10:08.442 13580.954 - 13686.233: 97.4785% ( 7) 00:10:08.442 13686.233 - 13791.512: 97.5456% ( 10) 00:10:08.442 13791.512 - 13896.790: 97.6328% ( 13) 00:10:08.442 13896.790 - 14002.069: 97.7401% ( 16) 00:10:08.442 14002.069 - 14107.348: 97.7803% ( 6) 00:10:08.442 14107.348 - 14212.627: 97.8004% ( 3) 00:10:08.442 14212.627 - 14317.905: 97.8205% ( 3) 00:10:08.442 14317.905 - 14423.184: 97.8407% ( 3) 00:10:08.442 14423.184 - 14528.463: 97.8541% ( 2) 00:10:08.442 16844.594 - 16949.873: 97.8608% ( 1) 00:10:08.442 16949.873 - 17055.152: 97.9211% ( 9) 00:10:08.442 17055.152 - 17160.431: 98.0016% ( 12) 00:10:08.442 17160.431 - 17265.709: 98.1156% ( 17) 00:10:08.442 17265.709 - 17370.988: 98.2162% ( 15) 00:10:08.442 17370.988 - 17476.267: 98.3704% ( 23) 00:10:08.442 17476.267 - 17581.545: 98.4844% ( 17) 00:10:08.442 17581.545 - 17686.824: 98.5314% ( 7) 00:10:08.442 17686.824 - 17792.103: 98.5850% ( 8) 00:10:08.442 17792.103 - 17897.382: 98.6186% ( 5) 00:10:08.442 17897.382 - 18002.660: 98.6320% ( 2) 00:10:08.442 18002.660 - 18107.939: 98.6454% ( 2) 00:10:08.442 18107.939 - 18213.218: 98.6655% ( 3) 00:10:08.442 18213.218 - 18318.496: 98.6923% ( 4) 00:10:08.442 18318.496 - 18423.775: 98.7124% ( 3) 00:10:08.442 19160.726 - 19266.005: 98.7393% ( 4) 00:10:08.442 19266.005 - 19371.284: 98.7862% ( 7) 00:10:08.442 19371.284 - 19476.562: 98.8466% ( 9) 00:10:08.442 19476.562 - 19581.841: 98.9002% ( 8) 00:10:08.442 19581.841 - 19687.120: 98.9405% ( 6) 00:10:08.442 19687.120 - 19792.398: 98.9539% ( 2) 00:10:08.442 19792.398 - 19897.677: 98.9740% ( 3) 00:10:08.442 19897.677 - 20002.956: 98.9941% ( 3) 00:10:08.442 20002.956 - 20108.235: 99.0209% ( 4) 00:10:08.442 20108.235 - 20213.513: 99.0477% ( 4) 00:10:08.442 20213.513 - 20318.792: 99.0813% ( 5) 00:10:08.442 20318.792 - 20424.071: 99.1014% ( 3) 00:10:08.442 20424.071 - 20529.349: 99.1349% ( 5) 00:10:08.442 20529.349 - 20634.628: 99.1416% ( 1) 00:10:08.442 40005.912 - 40216.469: 99.1819% ( 6) 00:10:08.442 40216.469 - 40427.027: 99.2355% ( 8) 00:10:08.442 40427.027 - 40637.584: 99.2892% ( 8) 00:10:08.442 40637.584 - 40848.141: 99.3361% ( 7) 00:10:08.442 40848.141 - 41058.699: 99.3898% ( 8) 00:10:08.442 41058.699 - 41269.256: 99.4434% ( 8) 00:10:08.442 41269.256 - 41479.814: 99.4970% ( 8) 00:10:08.442 41479.814 - 41690.371: 99.5507% ( 8) 00:10:08.442 41690.371 - 41900.929: 99.5708% ( 3) 00:10:08.442 48638.766 - 48849.324: 99.6111% ( 6) 00:10:08.442 48849.324 - 49059.881: 99.6647% ( 8) 00:10:08.442 49059.881 - 49270.439: 99.7049% ( 6) 00:10:08.442 49270.439 - 49480.996: 99.7519% ( 7) 00:10:08.442 49480.996 - 49691.553: 99.8055% ( 8) 00:10:08.442 49691.553 - 49902.111: 99.8525% ( 7) 00:10:08.442 49902.111 - 50112.668: 99.8994% ( 7) 00:10:08.442 50112.668 - 50323.226: 99.9464% ( 7) 00:10:08.442 50323.226 - 50533.783: 99.9933% ( 7) 00:10:08.442 50533.783 - 
50744.341: 100.0000% ( 1) 00:10:08.442 00:10:08.442 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:08.442 ============================================================================== 00:10:08.442 Range in us Cumulative IO count 00:10:08.442 6711.518 - 6737.838: 0.0201% ( 3) 00:10:08.443 6737.838 - 6790.477: 0.0872% ( 10) 00:10:08.443 6790.477 - 6843.116: 0.1609% ( 11) 00:10:08.443 6843.116 - 6895.756: 0.3286% ( 25) 00:10:08.443 6895.756 - 6948.395: 0.4694% ( 21) 00:10:08.443 6948.395 - 7001.035: 0.7041% ( 35) 00:10:08.443 7001.035 - 7053.674: 1.0998% ( 59) 00:10:08.443 7053.674 - 7106.313: 1.4485% ( 52) 00:10:08.443 7106.313 - 7158.953: 1.9514% ( 75) 00:10:08.443 7158.953 - 7211.592: 2.4812% ( 79) 00:10:08.443 7211.592 - 7264.231: 3.0244% ( 81) 00:10:08.443 7264.231 - 7316.871: 3.9163% ( 133) 00:10:08.443 7316.871 - 7369.510: 5.1569% ( 185) 00:10:08.443 7369.510 - 7422.149: 6.8737% ( 256) 00:10:08.443 7422.149 - 7474.789: 8.6910% ( 271) 00:10:08.443 7474.789 - 7527.428: 10.6961% ( 299) 00:10:08.443 7527.428 - 7580.067: 13.1907% ( 372) 00:10:08.443 7580.067 - 7632.707: 15.8597% ( 398) 00:10:08.443 7632.707 - 7685.346: 18.7969% ( 438) 00:10:08.443 7685.346 - 7737.986: 22.5925% ( 566) 00:10:08.443 7737.986 - 7790.625: 26.6229% ( 601) 00:10:08.443 7790.625 - 7843.264: 30.8543% ( 631) 00:10:08.443 7843.264 - 7895.904: 34.9785% ( 615) 00:10:08.443 7895.904 - 7948.543: 39.5856% ( 687) 00:10:08.443 7948.543 - 8001.182: 43.2202% ( 542) 00:10:08.443 8001.182 - 8053.822: 46.7677% ( 529) 00:10:08.443 8053.822 - 8106.461: 50.6170% ( 574) 00:10:08.443 8106.461 - 8159.100: 55.4453% ( 720) 00:10:08.443 8159.100 - 8211.740: 59.8981% ( 664) 00:10:08.443 8211.740 - 8264.379: 64.5252% ( 690) 00:10:08.443 8264.379 - 8317.018: 68.3342% ( 568) 00:10:08.443 8317.018 - 8369.658: 72.3981% ( 606) 00:10:08.443 8369.658 - 8422.297: 75.9120% ( 524) 00:10:08.443 8422.297 - 8474.937: 79.1778% ( 487) 00:10:08.443 8474.937 - 8527.576: 82.1754% ( 447) 00:10:08.443 8527.576 - 8580.215: 84.2342% ( 307) 00:10:08.443 8580.215 - 8632.855: 85.8839% ( 246) 00:10:08.443 8632.855 - 8685.494: 87.4128% ( 228) 00:10:08.443 8685.494 - 8738.133: 88.6668% ( 187) 00:10:08.443 8738.133 - 8790.773: 89.9343% ( 189) 00:10:08.443 8790.773 - 8843.412: 90.7994% ( 129) 00:10:08.443 8843.412 - 8896.051: 91.3828% ( 87) 00:10:08.443 8896.051 - 8948.691: 91.9729% ( 88) 00:10:08.443 8948.691 - 9001.330: 92.2881% ( 47) 00:10:08.443 9001.330 - 9053.969: 92.5429% ( 38) 00:10:08.443 9053.969 - 9106.609: 92.7910% ( 37) 00:10:08.443 9106.609 - 9159.248: 93.3409% ( 82) 00:10:08.443 9159.248 - 9211.888: 93.6561% ( 47) 00:10:08.443 9211.888 - 9264.527: 94.0249% ( 55) 00:10:08.443 9264.527 - 9317.166: 94.3401% ( 47) 00:10:08.443 9317.166 - 9369.806: 94.6821% ( 51) 00:10:08.443 9369.806 - 9422.445: 94.8364% ( 23) 00:10:08.443 9422.445 - 9475.084: 94.9839% ( 22) 00:10:08.443 9475.084 - 9527.724: 95.1113% ( 19) 00:10:08.443 9527.724 - 9580.363: 95.1717% ( 9) 00:10:08.443 9580.363 - 9633.002: 95.2253% ( 8) 00:10:08.443 9633.002 - 9685.642: 95.2656% ( 6) 00:10:08.443 9685.642 - 9738.281: 95.2924% ( 4) 00:10:08.443 9738.281 - 9790.920: 95.3326% ( 6) 00:10:08.443 9790.920 - 9843.560: 95.3594% ( 4) 00:10:08.443 9843.560 - 9896.199: 95.3997% ( 6) 00:10:08.443 9896.199 - 9948.839: 95.4399% ( 6) 00:10:08.443 9948.839 - 10001.478: 95.5606% ( 18) 00:10:08.443 10001.478 - 10054.117: 95.6009% ( 6) 00:10:08.443 10054.117 - 10106.757: 95.6411% ( 6) 00:10:08.443 10106.757 - 10159.396: 95.6880% ( 7) 00:10:08.443 10159.396 - 10212.035: 95.7149% ( 4) 
00:10:08.443 10212.035 - 10264.675: 95.7216% ( 1) 00:10:08.443 10369.953 - 10422.593: 95.7350% ( 2) 00:10:08.443 10422.593 - 10475.232: 95.7551% ( 3) 00:10:08.443 10475.232 - 10527.871: 95.7953% ( 6) 00:10:08.443 10527.871 - 10580.511: 95.8356% ( 6) 00:10:08.443 10580.511 - 10633.150: 95.8959% ( 9) 00:10:08.443 10633.150 - 10685.790: 96.0233% ( 19) 00:10:08.443 10685.790 - 10738.429: 96.0703% ( 7) 00:10:08.443 10738.429 - 10791.068: 96.1642% ( 14) 00:10:08.443 10791.068 - 10843.708: 96.2312% ( 10) 00:10:08.443 10843.708 - 10896.347: 96.3184% ( 13) 00:10:08.443 10896.347 - 10948.986: 96.4592% ( 21) 00:10:08.443 10948.986 - 11001.626: 96.4995% ( 6) 00:10:08.443 11001.626 - 11054.265: 96.5531% ( 8) 00:10:08.443 11054.265 - 11106.904: 96.5866% ( 5) 00:10:08.443 11106.904 - 11159.544: 96.6470% ( 9) 00:10:08.443 11159.544 - 11212.183: 96.6805% ( 5) 00:10:08.443 11212.183 - 11264.822: 96.7543% ( 11) 00:10:08.443 11264.822 - 11317.462: 96.8214% ( 10) 00:10:08.443 11317.462 - 11370.101: 96.8549% ( 5) 00:10:08.443 11370.101 - 11422.741: 96.8817% ( 4) 00:10:08.443 11422.741 - 11475.380: 96.9085% ( 4) 00:10:08.443 11475.380 - 11528.019: 96.9421% ( 5) 00:10:08.443 11528.019 - 11580.659: 96.9823% ( 6) 00:10:08.443 11580.659 - 11633.298: 97.0359% ( 8) 00:10:08.443 11633.298 - 11685.937: 97.1030% ( 10) 00:10:08.443 11685.937 - 11738.577: 97.1634% ( 9) 00:10:08.443 11738.577 - 11791.216: 97.2438% ( 12) 00:10:08.443 11791.216 - 11843.855: 97.2774% ( 5) 00:10:08.443 11843.855 - 11896.495: 97.3042% ( 4) 00:10:08.443 11896.495 - 11949.134: 97.3377% ( 5) 00:10:08.443 11949.134 - 12001.773: 97.3578% ( 3) 00:10:08.443 12001.773 - 12054.413: 97.3712% ( 2) 00:10:08.443 12054.413 - 12107.052: 97.3914% ( 3) 00:10:08.443 12107.052 - 12159.692: 97.4048% ( 2) 00:10:08.443 12159.692 - 12212.331: 97.4182% ( 2) 00:10:08.443 12212.331 - 12264.970: 97.4249% ( 1) 00:10:08.443 13423.036 - 13475.676: 97.4383% ( 2) 00:10:08.443 13475.676 - 13580.954: 97.4852% ( 7) 00:10:08.443 13580.954 - 13686.233: 97.6261% ( 21) 00:10:08.443 13686.233 - 13791.512: 97.7535% ( 19) 00:10:08.443 13791.512 - 13896.790: 97.7736% ( 3) 00:10:08.443 13896.790 - 14002.069: 97.7937% ( 3) 00:10:08.443 14002.069 - 14107.348: 97.8138% ( 3) 00:10:08.443 14107.348 - 14212.627: 97.8340% ( 3) 00:10:08.443 14212.627 - 14317.905: 97.8541% ( 3) 00:10:08.443 16528.758 - 16634.037: 97.8608% ( 1) 00:10:08.443 16739.316 - 16844.594: 97.8742% ( 2) 00:10:08.443 16844.594 - 16949.873: 97.9144% ( 6) 00:10:08.443 16949.873 - 17055.152: 97.9681% ( 8) 00:10:08.443 17055.152 - 17160.431: 98.0150% ( 7) 00:10:08.443 17160.431 - 17265.709: 98.0687% ( 8) 00:10:08.443 17265.709 - 17370.988: 98.1223% ( 8) 00:10:08.443 17370.988 - 17476.267: 98.1357% ( 2) 00:10:08.443 17476.267 - 17581.545: 98.2229% ( 13) 00:10:08.443 17581.545 - 17686.824: 98.2900% ( 10) 00:10:08.443 17686.824 - 17792.103: 98.3570% ( 10) 00:10:08.443 17792.103 - 17897.382: 98.4509% ( 14) 00:10:08.443 17897.382 - 18002.660: 98.5582% ( 16) 00:10:08.443 18002.660 - 18107.939: 98.6655% ( 16) 00:10:08.443 18107.939 - 18213.218: 98.7124% ( 7) 00:10:08.443 19266.005 - 19371.284: 98.7259% ( 2) 00:10:08.443 19476.562 - 19581.841: 98.7728% ( 7) 00:10:08.443 19581.841 - 19687.120: 98.8197% ( 7) 00:10:08.443 19687.120 - 19792.398: 98.8667% ( 7) 00:10:08.443 19792.398 - 19897.677: 98.9002% ( 5) 00:10:08.443 19897.677 - 20002.956: 98.9539% ( 8) 00:10:08.443 20002.956 - 20108.235: 99.0075% ( 8) 00:10:08.443 20108.235 - 20213.513: 99.0410% ( 5) 00:10:08.443 20213.513 - 20318.792: 99.0612% ( 3) 00:10:08.443 20318.792 - 20424.071: 
99.0813% ( 3) 00:10:08.443 20424.071 - 20529.349: 99.0947% ( 2) 00:10:08.443 20529.349 - 20634.628: 99.1148% ( 3) 00:10:08.443 20634.628 - 20739.907: 99.1416% ( 4) 00:10:08.443 38321.452 - 38532.010: 99.1819% ( 6) 00:10:08.443 38532.010 - 38742.567: 99.2355% ( 8) 00:10:08.443 38742.567 - 38953.124: 99.2892% ( 8) 00:10:08.443 38953.124 - 39163.682: 99.3428% ( 8) 00:10:08.443 39163.682 - 39374.239: 99.3965% ( 8) 00:10:08.443 39374.239 - 39584.797: 99.4501% ( 8) 00:10:08.443 39584.797 - 39795.354: 99.4970% ( 7) 00:10:08.443 39795.354 - 40005.912: 99.5507% ( 8) 00:10:08.443 40005.912 - 40216.469: 99.5708% ( 3) 00:10:08.443 46743.749 - 46954.307: 99.5842% ( 2) 00:10:08.443 46954.307 - 47164.864: 99.6312% ( 7) 00:10:08.443 47164.864 - 47375.422: 99.6848% ( 8) 00:10:08.443 47375.422 - 47585.979: 99.7318% ( 7) 00:10:08.443 47585.979 - 47796.537: 99.7787% ( 7) 00:10:08.443 47796.537 - 48007.094: 99.8256% ( 7) 00:10:08.443 48007.094 - 48217.651: 99.8793% ( 8) 00:10:08.443 48217.651 - 48428.209: 99.9262% ( 7) 00:10:08.443 48428.209 - 48638.766: 99.9732% ( 7) 00:10:08.443 48638.766 - 48849.324: 100.0000% ( 4) 00:10:08.443 00:10:08.443 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:08.443 ============================================================================== 00:10:08.443 Range in us Cumulative IO count 00:10:08.443 6474.641 - 6500.961: 0.0067% ( 1) 00:10:08.443 6632.559 - 6658.879: 0.0134% ( 1) 00:10:08.443 6658.879 - 6685.198: 0.0201% ( 1) 00:10:08.443 6685.198 - 6711.518: 0.0469% ( 4) 00:10:08.443 6711.518 - 6737.838: 0.0671% ( 3) 00:10:08.443 6737.838 - 6790.477: 0.1207% ( 8) 00:10:08.443 6790.477 - 6843.116: 0.2280% ( 16) 00:10:08.443 6843.116 - 6895.756: 0.3621% ( 20) 00:10:08.443 6895.756 - 6948.395: 0.6170% ( 38) 00:10:08.443 6948.395 - 7001.035: 0.8852% ( 40) 00:10:08.443 7001.035 - 7053.674: 1.1065% ( 33) 00:10:08.443 7053.674 - 7106.313: 1.4753% ( 55) 00:10:08.443 7106.313 - 7158.953: 1.8777% ( 60) 00:10:08.443 7158.953 - 7211.592: 2.5818% ( 105) 00:10:08.443 7211.592 - 7264.231: 3.0512% ( 70) 00:10:08.443 7264.231 - 7316.871: 3.6682% ( 92) 00:10:08.443 7316.871 - 7369.510: 4.6875% ( 152) 00:10:08.443 7369.510 - 7422.149: 6.0958% ( 210) 00:10:08.443 7422.149 - 7474.789: 7.9869% ( 282) 00:10:08.443 7474.789 - 7527.428: 10.5150% ( 377) 00:10:08.443 7527.428 - 7580.067: 13.1706% ( 396) 00:10:08.443 7580.067 - 7632.707: 15.8932% ( 406) 00:10:08.443 7632.707 - 7685.346: 19.0652% ( 473) 00:10:08.444 7685.346 - 7737.986: 22.3780% ( 494) 00:10:08.444 7737.986 - 7790.625: 25.8785% ( 522) 00:10:08.444 7790.625 - 7843.264: 30.1972% ( 644) 00:10:08.444 7843.264 - 7895.904: 34.7908% ( 685) 00:10:08.444 7895.904 - 7948.543: 39.5252% ( 706) 00:10:08.444 7948.543 - 8001.182: 43.9914% ( 666) 00:10:08.444 8001.182 - 8053.822: 48.2833% ( 640) 00:10:08.444 8053.822 - 8106.461: 52.3404% ( 605) 00:10:08.444 8106.461 - 8159.100: 56.1964% ( 575) 00:10:08.444 8159.100 - 8211.740: 60.1328% ( 587) 00:10:08.444 8211.740 - 8264.379: 64.0491% ( 584) 00:10:08.444 8264.379 - 8317.018: 68.5555% ( 672) 00:10:08.444 8317.018 - 8369.658: 72.0896% ( 527) 00:10:08.444 8369.658 - 8422.297: 75.8114% ( 555) 00:10:08.444 8422.297 - 8474.937: 79.1913% ( 504) 00:10:08.444 8474.937 - 8527.576: 82.1151% ( 436) 00:10:08.444 8527.576 - 8580.215: 84.5359% ( 361) 00:10:08.444 8580.215 - 8632.855: 86.4203% ( 281) 00:10:08.444 8632.855 - 8685.494: 87.9627% ( 230) 00:10:08.444 8685.494 - 8738.133: 89.2838% ( 197) 00:10:08.444 8738.133 - 8790.773: 90.1824% ( 134) 00:10:08.444 8790.773 - 8843.412: 90.7457% ( 84) 
00:10:08.444 8843.412 - 8896.051: 91.2956% ( 82) 00:10:08.444 8896.051 - 8948.691: 91.6980% ( 60) 00:10:08.444 8948.691 - 9001.330: 91.9461% ( 37) 00:10:08.444 9001.330 - 9053.969: 92.2411% ( 44) 00:10:08.444 9053.969 - 9106.609: 92.5295% ( 43) 00:10:08.444 9106.609 - 9159.248: 92.8514% ( 48) 00:10:08.444 9159.248 - 9211.888: 93.1398% ( 43) 00:10:08.444 9211.888 - 9264.527: 93.5220% ( 57) 00:10:08.444 9264.527 - 9317.166: 93.8238% ( 45) 00:10:08.444 9317.166 - 9369.806: 94.1591% ( 50) 00:10:08.444 9369.806 - 9422.445: 94.5815% ( 63) 00:10:08.444 9422.445 - 9475.084: 94.7358% ( 23) 00:10:08.444 9475.084 - 9527.724: 94.8565% ( 18) 00:10:08.444 9527.724 - 9580.363: 94.9705% ( 17) 00:10:08.444 9580.363 - 9633.002: 95.1046% ( 20) 00:10:08.444 9633.002 - 9685.642: 95.1583% ( 8) 00:10:08.444 9685.642 - 9738.281: 95.2320% ( 11) 00:10:08.444 9738.281 - 9790.920: 95.2723% ( 6) 00:10:08.444 9790.920 - 9843.560: 95.3259% ( 8) 00:10:08.444 9843.560 - 9896.199: 95.3796% ( 8) 00:10:08.444 9896.199 - 9948.839: 95.5405% ( 24) 00:10:08.444 9948.839 - 10001.478: 95.6344% ( 14) 00:10:08.444 10001.478 - 10054.117: 95.7082% ( 11) 00:10:08.444 10054.117 - 10106.757: 95.8356% ( 19) 00:10:08.444 10106.757 - 10159.396: 95.9227% ( 13) 00:10:08.444 10159.396 - 10212.035: 96.0233% ( 15) 00:10:08.444 10212.035 - 10264.675: 96.0636% ( 6) 00:10:08.444 10264.675 - 10317.314: 96.0971% ( 5) 00:10:08.444 10317.314 - 10369.953: 96.1440% ( 7) 00:10:08.444 10369.953 - 10422.593: 96.1642% ( 3) 00:10:08.444 10422.593 - 10475.232: 96.2044% ( 6) 00:10:08.444 10475.232 - 10527.871: 96.2379% ( 5) 00:10:08.444 10527.871 - 10580.511: 96.2849% ( 7) 00:10:08.444 10580.511 - 10633.150: 96.4257% ( 21) 00:10:08.444 10633.150 - 10685.790: 96.4659% ( 6) 00:10:08.444 10685.790 - 10738.429: 96.4995% ( 5) 00:10:08.444 10738.429 - 10791.068: 96.5196% ( 3) 00:10:08.444 10791.068 - 10843.708: 96.5330% ( 2) 00:10:08.444 10843.708 - 10896.347: 96.5464% ( 2) 00:10:08.444 10896.347 - 10948.986: 96.5598% ( 2) 00:10:08.444 10948.986 - 11001.626: 96.5665% ( 1) 00:10:08.444 11054.265 - 11106.904: 96.5732% ( 1) 00:10:08.444 11212.183 - 11264.822: 96.5799% ( 1) 00:10:08.444 11264.822 - 11317.462: 96.5866% ( 1) 00:10:08.444 11370.101 - 11422.741: 96.6001% ( 2) 00:10:08.444 11422.741 - 11475.380: 96.6269% ( 4) 00:10:08.444 11475.380 - 11528.019: 96.6403% ( 2) 00:10:08.444 11528.019 - 11580.659: 96.6872% ( 7) 00:10:08.444 11580.659 - 11633.298: 96.7208% ( 5) 00:10:08.444 11633.298 - 11685.937: 96.7677% ( 7) 00:10:08.444 11685.937 - 11738.577: 96.8348% ( 10) 00:10:08.444 11738.577 - 11791.216: 96.9756% ( 21) 00:10:08.444 11791.216 - 11843.855: 97.1365% ( 24) 00:10:08.444 11843.855 - 11896.495: 97.2304% ( 14) 00:10:08.444 11896.495 - 11949.134: 97.2707% ( 6) 00:10:08.444 11949.134 - 12001.773: 97.3243% ( 8) 00:10:08.444 12001.773 - 12054.413: 97.3645% ( 6) 00:10:08.444 12054.413 - 12107.052: 97.3914% ( 4) 00:10:08.444 12107.052 - 12159.692: 97.4182% ( 4) 00:10:08.444 12159.692 - 12212.331: 97.4249% ( 1) 00:10:08.444 13054.561 - 13107.200: 97.4316% ( 1) 00:10:08.444 13212.479 - 13265.118: 97.4517% ( 3) 00:10:08.444 13265.118 - 13317.757: 97.4718% ( 3) 00:10:08.444 13317.757 - 13370.397: 97.4987% ( 4) 00:10:08.444 13370.397 - 13423.036: 97.5322% ( 5) 00:10:08.444 13423.036 - 13475.676: 97.5523% ( 3) 00:10:08.444 13475.676 - 13580.954: 97.6127% ( 9) 00:10:08.444 13580.954 - 13686.233: 97.7133% ( 15) 00:10:08.444 13686.233 - 13791.512: 97.7334% ( 3) 00:10:08.444 13791.512 - 13896.790: 97.7602% ( 4) 00:10:08.444 13896.790 - 14002.069: 97.7803% ( 3) 00:10:08.444 
14002.069 - 14107.348: 97.8004% ( 3) 00:10:08.444 14107.348 - 14212.627: 97.8273% ( 4) 00:10:08.444 14212.627 - 14317.905: 97.8474% ( 3) 00:10:08.444 14317.905 - 14423.184: 97.8541% ( 1) 00:10:08.444 16528.758 - 16634.037: 97.9413% ( 13) 00:10:08.444 16634.037 - 16739.316: 98.0016% ( 9) 00:10:08.444 16739.316 - 16844.594: 98.0620% ( 9) 00:10:08.444 16844.594 - 16949.873: 98.0888% ( 4) 00:10:08.444 16949.873 - 17055.152: 98.1089% ( 3) 00:10:08.444 17055.152 - 17160.431: 98.1290% ( 3) 00:10:08.444 17160.431 - 17265.709: 98.1491% ( 3) 00:10:08.444 17265.709 - 17370.988: 98.1760% ( 4) 00:10:08.444 17370.988 - 17476.267: 98.2095% ( 5) 00:10:08.444 17476.267 - 17581.545: 98.2766% ( 10) 00:10:08.444 17581.545 - 17686.824: 98.3436% ( 10) 00:10:08.444 17686.824 - 17792.103: 98.4308% ( 13) 00:10:08.444 17792.103 - 17897.382: 98.4844% ( 8) 00:10:08.444 17897.382 - 18002.660: 98.5247% ( 6) 00:10:08.444 18002.660 - 18107.939: 98.6454% ( 18) 00:10:08.444 18107.939 - 18213.218: 98.6789% ( 5) 00:10:08.444 18213.218 - 18318.496: 98.7124% ( 5) 00:10:08.444 18529.054 - 18634.333: 98.7192% ( 1) 00:10:08.444 18634.333 - 18739.611: 98.7795% ( 9) 00:10:08.444 18739.611 - 18844.890: 98.8264% ( 7) 00:10:08.444 18844.890 - 18950.169: 98.8734% ( 7) 00:10:08.444 18950.169 - 19055.447: 98.9270% ( 8) 00:10:08.444 19055.447 - 19160.726: 98.9740% ( 7) 00:10:08.444 19160.726 - 19266.005: 99.0142% ( 6) 00:10:08.444 19266.005 - 19371.284: 99.0343% ( 3) 00:10:08.444 19371.284 - 19476.562: 99.0545% ( 3) 00:10:08.444 19476.562 - 19581.841: 99.0880% ( 5) 00:10:08.444 19581.841 - 19687.120: 99.1014% ( 2) 00:10:08.444 19687.120 - 19792.398: 99.1349% ( 5) 00:10:08.444 19792.398 - 19897.677: 99.1416% ( 1) 00:10:08.444 37058.108 - 37268.665: 99.1685% ( 4) 00:10:08.444 37268.665 - 37479.222: 99.2221% ( 8) 00:10:08.444 37479.222 - 37689.780: 99.2758% ( 8) 00:10:08.444 37689.780 - 37900.337: 99.3294% ( 8) 00:10:08.444 37900.337 - 38110.895: 99.3763% ( 7) 00:10:08.444 38110.895 - 38321.452: 99.4300% ( 8) 00:10:08.444 38321.452 - 38532.010: 99.4836% ( 8) 00:10:08.444 38532.010 - 38742.567: 99.5373% ( 8) 00:10:08.444 38742.567 - 38953.124: 99.5708% ( 5) 00:10:08.444 46112.077 - 46322.635: 99.6043% ( 5) 00:10:08.444 46322.635 - 46533.192: 99.6513% ( 7) 00:10:08.444 46533.192 - 46743.749: 99.6982% ( 7) 00:10:08.444 46743.749 - 46954.307: 99.7452% ( 7) 00:10:08.444 46954.307 - 47164.864: 99.7988% ( 8) 00:10:08.444 47164.864 - 47375.422: 99.8458% ( 7) 00:10:08.444 47375.422 - 47585.979: 99.8994% ( 8) 00:10:08.444 47585.979 - 47796.537: 99.9464% ( 7) 00:10:08.444 47796.537 - 48007.094: 99.9933% ( 7) 00:10:08.444 48007.094 - 48217.651: 100.0000% ( 1) 00:10:08.444 00:10:08.444 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:08.444 ============================================================================== 00:10:08.444 Range in us Cumulative IO count 00:10:08.444 6579.920 - 6606.239: 0.0134% ( 2) 00:10:08.444 6737.838 - 6790.477: 0.0334% ( 3) 00:10:08.444 6790.477 - 6843.116: 0.1068% ( 11) 00:10:08.444 6843.116 - 6895.756: 0.2003% ( 14) 00:10:08.444 6895.756 - 6948.395: 0.3272% ( 19) 00:10:08.444 6948.395 - 7001.035: 0.6076% ( 42) 00:10:08.444 7001.035 - 7053.674: 0.7612% ( 23) 00:10:08.444 7053.674 - 7106.313: 0.9415% ( 27) 00:10:08.444 7106.313 - 7158.953: 1.3221% ( 57) 00:10:08.445 7158.953 - 7211.592: 1.9030% ( 87) 00:10:08.445 7211.592 - 7264.231: 2.9981% ( 164) 00:10:08.445 7264.231 - 7316.871: 4.0732% ( 161) 00:10:08.445 7316.871 - 7369.510: 5.7492% ( 251) 00:10:08.445 7369.510 - 7422.149: 7.1648% ( 212) 00:10:08.445 
7422.149 - 7474.789: 9.0612% ( 284) 00:10:08.445 7474.789 - 7527.428: 10.6904% ( 244) 00:10:08.445 7527.428 - 7580.067: 12.9474% ( 338) 00:10:08.445 7580.067 - 7632.707: 15.7051% ( 413) 00:10:08.445 7632.707 - 7685.346: 19.1039% ( 509) 00:10:08.445 7685.346 - 7737.986: 22.6696% ( 534) 00:10:08.445 7737.986 - 7790.625: 26.6493% ( 596) 00:10:08.445 7790.625 - 7843.264: 30.4688% ( 572) 00:10:08.445 7843.264 - 7895.904: 34.7022% ( 634) 00:10:08.445 7895.904 - 7948.543: 38.6418% ( 590) 00:10:08.445 7948.543 - 8001.182: 43.4829% ( 725) 00:10:08.445 8001.182 - 8053.822: 48.0836% ( 689) 00:10:08.445 8053.822 - 8106.461: 52.9581% ( 730) 00:10:08.445 8106.461 - 8159.100: 57.6790% ( 707) 00:10:08.445 8159.100 - 8211.740: 61.4850% ( 570) 00:10:08.445 8211.740 - 8264.379: 65.3646% ( 581) 00:10:08.445 8264.379 - 8317.018: 68.7366% ( 505) 00:10:08.445 8317.018 - 8369.658: 71.9885% ( 487) 00:10:08.445 8369.658 - 8422.297: 75.0334% ( 456) 00:10:08.445 8422.297 - 8474.937: 77.7911% ( 413) 00:10:08.445 8474.937 - 8527.576: 80.6023% ( 421) 00:10:08.445 8527.576 - 8580.215: 83.0529% ( 367) 00:10:08.445 8580.215 - 8632.855: 85.0227% ( 295) 00:10:08.445 8632.855 - 8685.494: 86.7455% ( 258) 00:10:08.445 8685.494 - 8738.133: 88.2479% ( 225) 00:10:08.445 8738.133 - 8790.773: 89.2561% ( 151) 00:10:08.445 8790.773 - 8843.412: 90.0040% ( 112) 00:10:08.445 8843.412 - 8896.051: 90.5849% ( 87) 00:10:08.445 8896.051 - 8948.691: 91.1325% ( 82) 00:10:08.445 8948.691 - 9001.330: 91.5665% ( 65) 00:10:08.445 9001.330 - 9053.969: 92.0272% ( 69) 00:10:08.445 9053.969 - 9106.609: 92.5347% ( 76) 00:10:08.445 9106.609 - 9159.248: 92.8285% ( 44) 00:10:08.445 9159.248 - 9211.888: 93.1424% ( 47) 00:10:08.445 9211.888 - 9264.527: 93.4762% ( 50) 00:10:08.445 9264.527 - 9317.166: 93.6498% ( 26) 00:10:08.445 9317.166 - 9369.806: 93.8635% ( 32) 00:10:08.445 9369.806 - 9422.445: 93.9837% ( 18) 00:10:08.445 9422.445 - 9475.084: 94.0839% ( 15) 00:10:08.445 9475.084 - 9527.724: 94.1573% ( 11) 00:10:08.445 9527.724 - 9580.363: 94.2975% ( 21) 00:10:08.445 9580.363 - 9633.002: 94.3510% ( 8) 00:10:08.445 9633.002 - 9685.642: 94.4311% ( 12) 00:10:08.445 9685.642 - 9738.281: 94.5045% ( 11) 00:10:08.445 9738.281 - 9790.920: 94.5446% ( 6) 00:10:08.445 9790.920 - 9843.560: 94.6448% ( 15) 00:10:08.445 9843.560 - 9896.199: 94.8451% ( 30) 00:10:08.445 9896.199 - 9948.839: 94.9519% ( 16) 00:10:08.445 9948.839 - 10001.478: 95.0721% ( 18) 00:10:08.445 10001.478 - 10054.117: 95.2257% ( 23) 00:10:08.445 10054.117 - 10106.757: 95.4728% ( 37) 00:10:08.445 10106.757 - 10159.396: 95.6464% ( 26) 00:10:08.445 10159.396 - 10212.035: 95.7999% ( 23) 00:10:08.445 10212.035 - 10264.675: 95.8667% ( 10) 00:10:08.445 10264.675 - 10317.314: 95.9135% ( 7) 00:10:08.445 10317.314 - 10369.953: 95.9468% ( 5) 00:10:08.445 10369.953 - 10422.593: 95.9736% ( 4) 00:10:08.445 10422.593 - 10475.232: 96.0003% ( 4) 00:10:08.445 10475.232 - 10527.871: 96.0403% ( 6) 00:10:08.445 10527.871 - 10580.511: 96.0737% ( 5) 00:10:08.445 10580.511 - 10633.150: 96.1138% ( 6) 00:10:08.445 10633.150 - 10685.790: 96.1338% ( 3) 00:10:08.445 10685.790 - 10738.429: 96.1472% ( 2) 00:10:08.445 10738.429 - 10791.068: 96.1538% ( 1) 00:10:08.445 10948.986 - 11001.626: 96.1605% ( 1) 00:10:08.445 11317.462 - 11370.101: 96.1672% ( 1) 00:10:08.445 11370.101 - 11422.741: 96.2006% ( 5) 00:10:08.445 11422.741 - 11475.380: 96.2273% ( 4) 00:10:08.445 11475.380 - 11528.019: 96.2607% ( 5) 00:10:08.445 11528.019 - 11580.659: 96.3942% ( 20) 00:10:08.445 11580.659 - 11633.298: 96.4744% ( 12) 00:10:08.445 11633.298 - 
11685.937: 96.4944% ( 3) 00:10:08.445 11685.937 - 11738.577: 96.5077% ( 2) 00:10:08.445 11738.577 - 11791.216: 96.5278% ( 3) 00:10:08.445 11791.216 - 11843.855: 96.5411% ( 2) 00:10:08.445 11843.855 - 11896.495: 96.5678% ( 4) 00:10:08.445 11896.495 - 11949.134: 96.6012% ( 5) 00:10:08.445 11949.134 - 12001.773: 96.6213% ( 3) 00:10:08.445 12001.773 - 12054.413: 96.6413% ( 3) 00:10:08.445 12054.413 - 12107.052: 96.6613% ( 3) 00:10:08.445 12107.052 - 12159.692: 96.6814% ( 3) 00:10:08.445 12159.692 - 12212.331: 96.7081% ( 4) 00:10:08.445 12212.331 - 12264.970: 96.7281% ( 3) 00:10:08.445 12264.970 - 12317.610: 96.7682% ( 6) 00:10:08.445 12317.610 - 12370.249: 96.8283% ( 9) 00:10:08.445 12370.249 - 12422.888: 96.8683% ( 6) 00:10:08.445 12422.888 - 12475.528: 96.9217% ( 8) 00:10:08.445 12475.528 - 12528.167: 96.9685% ( 7) 00:10:08.445 12528.167 - 12580.806: 97.0019% ( 5) 00:10:08.445 12580.806 - 12633.446: 97.0085% ( 1) 00:10:08.445 12738.724 - 12791.364: 97.0353% ( 4) 00:10:08.445 12791.364 - 12844.003: 97.0486% ( 2) 00:10:08.445 12844.003 - 12896.643: 97.0553% ( 1) 00:10:08.445 12896.643 - 12949.282: 97.0686% ( 2) 00:10:08.445 12949.282 - 13001.921: 97.0887% ( 3) 00:10:08.445 13001.921 - 13054.561: 97.1020% ( 2) 00:10:08.445 13054.561 - 13107.200: 97.1221% ( 3) 00:10:08.445 13107.200 - 13159.839: 97.1354% ( 2) 00:10:08.445 13159.839 - 13212.479: 97.2690% ( 20) 00:10:08.445 13212.479 - 13265.118: 97.3024% ( 5) 00:10:08.445 13265.118 - 13317.757: 97.3157% ( 2) 00:10:08.445 13317.757 - 13370.397: 97.3291% ( 2) 00:10:08.445 13370.397 - 13423.036: 97.3424% ( 2) 00:10:08.445 13423.036 - 13475.676: 97.3558% ( 2) 00:10:08.445 13475.676 - 13580.954: 97.3758% ( 3) 00:10:08.445 13580.954 - 13686.233: 97.4025% ( 4) 00:10:08.445 13686.233 - 13791.512: 97.4225% ( 3) 00:10:08.445 13791.512 - 13896.790: 97.4359% ( 2) 00:10:08.445 15686.529 - 15791.807: 97.4693% ( 5) 00:10:08.445 15791.807 - 15897.086: 97.5494% ( 12) 00:10:08.445 15897.086 - 16002.365: 97.6229% ( 11) 00:10:08.445 16002.365 - 16107.643: 97.7297% ( 16) 00:10:08.445 16107.643 - 16212.922: 97.8632% ( 20) 00:10:08.445 16212.922 - 16318.201: 97.9501% ( 13) 00:10:08.445 16318.201 - 16423.480: 98.0502% ( 15) 00:10:08.445 16423.480 - 16528.758: 98.1571% ( 16) 00:10:08.445 16528.758 - 16634.037: 98.1904% ( 5) 00:10:08.445 16634.037 - 16739.316: 98.2038% ( 2) 00:10:08.445 16739.316 - 16844.594: 98.2238% ( 3) 00:10:08.445 16844.594 - 16949.873: 98.2505% ( 4) 00:10:08.445 16949.873 - 17055.152: 98.2706% ( 3) 00:10:08.445 17055.152 - 17160.431: 98.2906% ( 3) 00:10:08.445 18107.939 - 18213.218: 98.3373% ( 7) 00:10:08.445 18213.218 - 18318.496: 98.3974% ( 9) 00:10:08.445 18318.496 - 18423.775: 98.4509% ( 8) 00:10:08.445 18423.775 - 18529.054: 98.5777% ( 19) 00:10:08.445 18529.054 - 18634.333: 98.7246% ( 22) 00:10:08.445 18634.333 - 18739.611: 98.8114% ( 13) 00:10:08.445 18739.611 - 18844.890: 98.9116% ( 15) 00:10:08.445 18844.890 - 18950.169: 98.9517% ( 6) 00:10:08.445 18950.169 - 19055.447: 98.9917% ( 6) 00:10:08.445 19055.447 - 19160.726: 99.0318% ( 6) 00:10:08.445 19160.726 - 19266.005: 99.0585% ( 4) 00:10:08.445 19266.005 - 19371.284: 99.0852% ( 4) 00:10:08.445 19371.284 - 19476.562: 99.1052% ( 3) 00:10:08.445 19476.562 - 19581.841: 99.1319% ( 4) 00:10:08.445 19581.841 - 19687.120: 99.1453% ( 2) 00:10:08.445 27793.581 - 28004.138: 99.1920% ( 7) 00:10:08.445 28004.138 - 28214.696: 99.2455% ( 8) 00:10:08.445 28214.696 - 28425.253: 99.2989% ( 8) 00:10:08.445 28425.253 - 28635.810: 99.3523% ( 8) 00:10:08.445 28635.810 - 28846.368: 99.4124% ( 9) 00:10:08.445 
28846.368 - 29056.925: 99.4658% ( 8) 00:10:08.445 29056.925 - 29267.483: 99.5192% ( 8) 00:10:08.445 29267.483 - 29478.040: 99.5660% ( 7) 00:10:08.445 29478.040 - 29688.598: 99.5726% ( 1) 00:10:08.445 36636.993 - 36847.550: 99.5994% ( 4) 00:10:08.445 36847.550 - 37058.108: 99.6595% ( 9) 00:10:08.445 37058.108 - 37268.665: 99.7062% ( 7) 00:10:08.445 37268.665 - 37479.222: 99.7596% ( 8) 00:10:08.445 37479.222 - 37689.780: 99.8130% ( 8) 00:10:08.445 37689.780 - 37900.337: 99.8598% ( 7) 00:10:08.445 37900.337 - 38110.895: 99.9199% ( 9) 00:10:08.445 38110.895 - 38321.452: 99.9733% ( 8) 00:10:08.445 38321.452 - 38532.010: 100.0000% ( 4) 00:10:08.445 00:10:08.445 17:59:37 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:08.445 00:10:08.445 real 0m2.695s 00:10:08.445 user 0m2.285s 00:10:08.445 sys 0m0.318s 00:10:08.445 17:59:37 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.445 ************************************ 00:10:08.445 END TEST nvme_perf 00:10:08.445 ************************************ 00:10:08.445 17:59:37 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:08.445 17:59:37 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:08.445 17:59:37 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:08.445 17:59:37 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.445 17:59:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.445 ************************************ 00:10:08.445 START TEST nvme_hello_world 00:10:08.445 ************************************ 00:10:08.445 17:59:37 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:08.705 Initializing NVMe Controllers 00:10:08.705 Attached to 0000:00:10.0 00:10:08.705 Namespace ID: 1 size: 6GB 00:10:08.705 Attached to 0000:00:11.0 00:10:08.705 Namespace ID: 1 size: 5GB 00:10:08.705 Attached to 0000:00:13.0 00:10:08.705 Namespace ID: 1 size: 1GB 00:10:08.705 Attached to 0000:00:12.0 00:10:08.705 Namespace ID: 1 size: 4GB 00:10:08.705 Namespace ID: 2 size: 4GB 00:10:08.705 Namespace ID: 3 size: 4GB 00:10:08.705 Initialization complete. 00:10:08.705 INFO: using host memory buffer for IO 00:10:08.705 Hello world! 00:10:08.705 INFO: using host memory buffer for IO 00:10:08.705 Hello world! 00:10:08.705 INFO: using host memory buffer for IO 00:10:08.705 Hello world! 00:10:08.705 INFO: using host memory buffer for IO 00:10:08.705 Hello world! 00:10:08.705 INFO: using host memory buffer for IO 00:10:08.705 Hello world! 00:10:08.705 INFO: using host memory buffer for IO 00:10:08.705 Hello world! 
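The six "Hello world!" lines above come from SPDK's hello_world example, which, per its output, attaches to every controller and prints one greeting per namespace. A minimal sketch of reproducing the run by hand (assuming an already-built SPDK tree with the NVMe controllers bound for userspace access, e.g. via scripts/setup.sh; reading -i as the shared-memory instance ID is an assumption based on how the flag is used across this job):

  cd /home/vagrant/spdk_repo/spdk
  sudo scripts/setup.sh                    # bind NVMe controllers away from the kernel driver
  sudo build/examples/hello_world -i 0     # one "Hello world!" per attached namespace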
00:10:08.705 00:10:08.705 real 0m0.305s 00:10:08.705 user 0m0.119s 00:10:08.705 sys 0m0.140s 00:10:08.705 17:59:37 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.705 17:59:37 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:08.705 ************************************ 00:10:08.705 END TEST nvme_hello_world 00:10:08.705 ************************************ 00:10:08.705 17:59:38 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:08.705 17:59:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:08.705 17:59:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.705 17:59:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.965 ************************************ 00:10:08.965 START TEST nvme_sgl 00:10:08.965 ************************************ 00:10:08.965 17:59:38 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:08.965 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:08.965 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:09.224 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:09.224 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:09.224 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:09.224 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:09.224 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:09.224 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:09.224 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:09.224 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:09.224 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:09.224 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:09.224 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:10:09.224 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:09.224 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:09.224 NVMe Readv/Writev Request test 00:10:09.224 Attached to 0000:00:10.0 00:10:09.224 Attached to 0000:00:11.0 00:10:09.224 Attached to 0000:00:13.0 00:10:09.224 Attached to 0000:00:12.0 00:10:09.224 0000:00:10.0: build_io_request_2 test passed 00:10:09.224 0000:00:10.0: build_io_request_4 test passed 00:10:09.224 0000:00:10.0: build_io_request_5 test passed 00:10:09.224 0000:00:10.0: build_io_request_6 test passed 00:10:09.224 0000:00:10.0: build_io_request_7 test passed 00:10:09.224 0000:00:10.0: build_io_request_10 test passed 00:10:09.224 0000:00:11.0: build_io_request_2 test passed 00:10:09.224 0000:00:11.0: build_io_request_4 test passed 00:10:09.224 0000:00:11.0: build_io_request_5 test passed 00:10:09.224 0000:00:11.0: build_io_request_6 test passed 00:10:09.224 0000:00:11.0: build_io_request_7 test passed 00:10:09.224 0000:00:11.0: build_io_request_10 test passed 00:10:09.224 Cleaning up... 00:10:09.224 00:10:09.224 real 0m0.368s 00:10:09.224 user 0m0.180s 00:10:09.224 sys 0m0.144s 00:10:09.224 17:59:38 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.224 17:59:38 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:09.224 ************************************ 00:10:09.224 END TEST nvme_sgl 00:10:09.224 ************************************ 00:10:09.224 17:59:38 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:09.224 17:59:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:09.224 17:59:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.224 17:59:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.224 ************************************ 00:10:09.224 START TEST nvme_e2edp 00:10:09.224 ************************************ 00:10:09.224 17:59:38 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:09.483 NVMe Write/Read with End-to-End data protection test 00:10:09.483 Attached to 0000:00:10.0 00:10:09.483 Attached to 0000:00:11.0 00:10:09.483 Attached to 0000:00:13.0 00:10:09.483 Attached to 0000:00:12.0 00:10:09.483 Cleaning up... 
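Both binaries above live under test/nvme/ and can be run standalone against the same devices. Note the "Invalid IO length parameter" lines are expected rejections from the negative build_io_request_* cases, not failures: the valid cases still report "test passed" and the test exits cleanly. A sketch, with paths taken verbatim from the log:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl        # SGL readv/writev request building
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp  # end-to-end data protection write/read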
00:10:09.483 00:10:09.483 real 0m0.310s 00:10:09.483 user 0m0.105s 00:10:09.483 sys 0m0.155s 00:10:09.483 17:59:38 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.483 17:59:38 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:09.483 ************************************ 00:10:09.483 END TEST nvme_e2edp 00:10:09.483 ************************************ 00:10:09.742 17:59:38 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:09.742 17:59:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:09.742 17:59:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.742 17:59:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:09.742 ************************************ 00:10:09.742 START TEST nvme_reserve 00:10:09.742 ************************************ 00:10:09.742 17:59:38 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:10.002 ===================================================== 00:10:10.002 NVMe Controller at PCI bus 0, device 16, function 0 00:10:10.002 ===================================================== 00:10:10.002 Reservations: Not Supported 00:10:10.002 ===================================================== 00:10:10.002 NVMe Controller at PCI bus 0, device 17, function 0 00:10:10.002 ===================================================== 00:10:10.002 Reservations: Not Supported 00:10:10.002 ===================================================== 00:10:10.002 NVMe Controller at PCI bus 0, device 19, function 0 00:10:10.002 ===================================================== 00:10:10.002 Reservations: Not Supported 00:10:10.002 ===================================================== 00:10:10.002 NVMe Controller at PCI bus 0, device 18, function 0 00:10:10.002 ===================================================== 00:10:10.002 Reservations: Not Supported 00:10:10.002 Reservation test passed 00:10:10.002 00:10:10.002 real 0m0.293s 00:10:10.002 user 0m0.101s 00:10:10.002 sys 0m0.147s 00:10:10.002 17:59:39 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.002 17:59:39 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:10.002 ************************************ 00:10:10.002 END TEST nvme_reserve 00:10:10.002 ************************************ 00:10:10.002 17:59:39 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:10.002 17:59:39 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:10.002 17:59:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.002 17:59:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.002 ************************************ 00:10:10.002 START TEST nvme_err_injection 00:10:10.002 ************************************ 00:10:10.002 17:59:39 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:10.261 NVMe Error Injection test 00:10:10.261 Attached to 0000:00:10.0 00:10:10.261 Attached to 0000:00:11.0 00:10:10.261 Attached to 0000:00:13.0 00:10:10.261 Attached to 0000:00:12.0 00:10:10.261 0000:00:13.0: get features failed as expected 00:10:10.261 0000:00:12.0: get features failed as expected 00:10:10.261 0000:00:10.0: get features failed as expected 00:10:10.261 0000:00:11.0: get features failed as expected 00:10:10.261 
0000:00:10.0: get features successfully as expected 00:10:10.261 0000:00:11.0: get features successfully as expected 00:10:10.261 0000:00:13.0: get features successfully as expected 00:10:10.261 0000:00:12.0: get features successfully as expected 00:10:10.261 0000:00:11.0: read failed as expected 00:10:10.261 0000:00:13.0: read failed as expected 00:10:10.261 0000:00:12.0: read failed as expected 00:10:10.261 0000:00:10.0: read failed as expected 00:10:10.261 0000:00:11.0: read successfully as expected 00:10:10.261 0000:00:10.0: read successfully as expected 00:10:10.261 0000:00:13.0: read successfully as expected 00:10:10.261 0000:00:12.0: read successfully as expected 00:10:10.261 Cleaning up... 00:10:10.261 00:10:10.261 real 0m0.312s 00:10:10.261 user 0m0.127s 00:10:10.261 sys 0m0.140s 00:10:10.261 17:59:39 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.261 17:59:39 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:10.261 ************************************ 00:10:10.261 END TEST nvme_err_injection 00:10:10.261 ************************************ 00:10:10.261 17:59:39 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:10.261 17:59:39 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:10:10.262 17:59:39 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.262 17:59:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:10.519 ************************************ 00:10:10.519 START TEST nvme_overhead 00:10:10.519 ************************************ 00:10:10.519 17:59:39 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:11.898 Initializing NVMe Controllers 00:10:11.898 Attached to 0000:00:10.0 00:10:11.898 Attached to 0000:00:11.0 00:10:11.898 Attached to 0000:00:13.0 00:10:11.898 Attached to 0000:00:12.0 00:10:11.898 Initialization complete. Launching workers. 
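The overhead run above was started as overhead -o 4096 -t 1 -H -i 0. A sketch of the same invocation with the flags spelled out; the per-flag readings are assumptions inferred from the output that follows, not quoted from the tool's help text:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead \
    -o 4096 \    # 4 KiB I/O size
    -t 1 \       # 1 second run time
    -H \         # emit the submit/complete latency histograms printed below
    -i 0         # shared-memory ID, matching the rest of the job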
00:10:11.898 submit (in ns) avg, min, max = 13403.0, 11451.4, 102136.5 00:10:11.898 complete (in ns) avg, min, max = 9131.0, 7684.3, 513619.3 00:10:11.898 00:10:11.898 Submit histogram 00:10:11.898 ================ 00:10:11.898 Range in us Cumulative Count 00:10:11.898 11.412 - 11.463: 0.0169% ( 1) 00:10:11.898 11.875 - 11.926: 0.0844% ( 4) 00:10:11.898 11.926 - 11.978: 0.2700% ( 11) 00:10:11.898 11.978 - 12.029: 0.5062% ( 14) 00:10:11.898 12.029 - 12.080: 0.8436% ( 20) 00:10:11.898 12.080 - 12.132: 1.2823% ( 26) 00:10:11.898 12.132 - 12.183: 1.5016% ( 13) 00:10:11.898 12.183 - 12.235: 1.8053% ( 18) 00:10:11.898 12.235 - 12.286: 2.0752% ( 16) 00:10:11.898 12.286 - 12.337: 2.4296% ( 21) 00:10:11.898 12.337 - 12.389: 2.6658% ( 14) 00:10:11.898 12.389 - 12.440: 2.8851% ( 13) 00:10:11.898 12.440 - 12.492: 3.1213% ( 14) 00:10:11.898 12.492 - 12.543: 3.5600% ( 26) 00:10:11.898 12.543 - 12.594: 3.9143% ( 21) 00:10:11.898 12.594 - 12.646: 4.8422% ( 55) 00:10:11.898 12.646 - 12.697: 6.6307% ( 106) 00:10:11.898 12.697 - 12.749: 10.4943% ( 229) 00:10:11.898 12.749 - 12.800: 17.0238% ( 387) 00:10:11.898 12.800 - 12.851: 25.0380% ( 475) 00:10:11.899 12.851 - 12.903: 33.2546% ( 487) 00:10:11.899 12.903 - 12.954: 41.6400% ( 497) 00:10:11.899 12.954 - 13.006: 49.3504% ( 457) 00:10:11.899 13.006 - 13.057: 56.5041% ( 424) 00:10:11.899 13.057 - 13.108: 62.8311% ( 375) 00:10:11.899 13.108 - 13.160: 69.3606% ( 387) 00:10:11.899 13.160 - 13.263: 79.5850% ( 606) 00:10:11.899 13.263 - 13.365: 85.3383% ( 341) 00:10:11.899 13.365 - 13.468: 88.5271% ( 189) 00:10:11.899 13.468 - 13.571: 90.8385% ( 137) 00:10:11.899 13.571 - 13.674: 92.2220% ( 82) 00:10:11.899 13.674 - 13.777: 93.1669% ( 56) 00:10:11.899 13.777 - 13.880: 93.7574% ( 35) 00:10:11.899 13.880 - 13.982: 94.0105% ( 15) 00:10:11.899 13.982 - 14.085: 94.2635% ( 15) 00:10:11.899 14.085 - 14.188: 94.3310% ( 4) 00:10:11.899 14.188 - 14.291: 94.3816% ( 3) 00:10:11.899 14.291 - 14.394: 94.4829% ( 6) 00:10:11.899 14.394 - 14.496: 94.5166% ( 2) 00:10:11.899 14.599 - 14.702: 94.5672% ( 3) 00:10:11.899 14.702 - 14.805: 94.6010% ( 2) 00:10:11.899 14.908 - 15.010: 94.6179% ( 1) 00:10:11.899 15.113 - 15.216: 94.6347% ( 1) 00:10:11.899 15.319 - 15.422: 94.6516% ( 1) 00:10:11.899 15.627 - 15.730: 94.6685% ( 1) 00:10:11.899 15.730 - 15.833: 94.6853% ( 1) 00:10:11.899 15.936 - 16.039: 94.7022% ( 1) 00:10:11.899 16.141 - 16.244: 94.7360% ( 2) 00:10:11.899 16.244 - 16.347: 94.8034% ( 4) 00:10:11.899 16.450 - 16.553: 94.8203% ( 1) 00:10:11.899 16.553 - 16.655: 94.8372% ( 1) 00:10:11.899 16.655 - 16.758: 94.8541% ( 1) 00:10:11.899 16.758 - 16.861: 94.9047% ( 3) 00:10:11.899 16.861 - 16.964: 95.0059% ( 6) 00:10:11.899 16.964 - 17.067: 95.1746% ( 10) 00:10:11.899 17.067 - 17.169: 95.3096% ( 8) 00:10:11.899 17.169 - 17.272: 95.4614% ( 9) 00:10:11.899 17.272 - 17.375: 95.5796% ( 7) 00:10:11.899 17.375 - 17.478: 95.8832% ( 18) 00:10:11.899 17.478 - 17.581: 96.2544% ( 22) 00:10:11.899 17.581 - 17.684: 96.3388% ( 5) 00:10:11.899 17.684 - 17.786: 96.5750% ( 14) 00:10:11.899 17.786 - 17.889: 96.8449% ( 16) 00:10:11.899 17.889 - 17.992: 96.9968% ( 9) 00:10:11.899 17.992 - 18.095: 97.1655% ( 10) 00:10:11.899 18.095 - 18.198: 97.3005% ( 8) 00:10:11.899 18.198 - 18.300: 97.4186% ( 7) 00:10:11.899 18.300 - 18.403: 97.6548% ( 14) 00:10:11.899 18.403 - 18.506: 97.8741% ( 13) 00:10:11.899 18.506 - 18.609: 97.9585% ( 5) 00:10:11.899 18.609 - 18.712: 98.0935% ( 8) 00:10:11.899 18.712 - 18.814: 98.2116% ( 7) 00:10:11.899 18.814 - 18.917: 98.3465% ( 8) 00:10:11.899 18.917 - 19.020: 98.4647% ( 7) 
00:10:11.899 19.020 - 19.123: 98.5153% ( 3) 00:10:11.899 19.123 - 19.226: 98.5996% ( 5) 00:10:11.899 19.226 - 19.329: 98.6671% ( 4) 00:10:11.899 19.329 - 19.431: 98.7177% ( 3) 00:10:11.899 19.431 - 19.534: 98.7852% ( 4) 00:10:11.899 19.534 - 19.637: 98.8190% ( 2) 00:10:11.899 19.637 - 19.740: 98.8358% ( 1) 00:10:11.899 19.843 - 19.945: 98.8696% ( 2) 00:10:11.899 20.048 - 20.151: 98.9708% ( 6) 00:10:11.899 20.151 - 20.254: 99.0214% ( 3) 00:10:11.899 20.254 - 20.357: 99.0552% ( 2) 00:10:11.899 20.357 - 20.459: 99.0889% ( 2) 00:10:11.899 20.459 - 20.562: 99.1564% ( 4) 00:10:11.899 20.562 - 20.665: 99.1733% ( 1) 00:10:11.899 20.665 - 20.768: 99.2070% ( 2) 00:10:11.899 20.768 - 20.871: 99.2239% ( 1) 00:10:11.899 20.871 - 20.973: 99.2408% ( 1) 00:10:11.899 21.179 - 21.282: 99.2745% ( 2) 00:10:11.899 21.282 - 21.385: 99.2914% ( 1) 00:10:11.899 21.385 - 21.488: 99.3083% ( 1) 00:10:11.899 21.590 - 21.693: 99.3251% ( 1) 00:10:11.899 22.207 - 22.310: 99.3420% ( 1) 00:10:11.899 22.310 - 22.413: 99.3589% ( 1) 00:10:11.899 22.413 - 22.516: 99.3757% ( 1) 00:10:11.899 22.824 - 22.927: 99.3926% ( 1) 00:10:11.899 22.927 - 23.030: 99.4095% ( 1) 00:10:11.899 23.030 - 23.133: 99.4432% ( 2) 00:10:11.899 23.235 - 23.338: 99.4601% ( 1) 00:10:11.899 23.338 - 23.441: 99.4770% ( 1) 00:10:11.899 23.749 - 23.852: 99.4938% ( 1) 00:10:11.899 24.058 - 24.161: 99.5107% ( 1) 00:10:11.899 24.263 - 24.366: 99.5276% ( 1) 00:10:11.899 24.366 - 24.469: 99.5445% ( 1) 00:10:11.899 24.778 - 24.880: 99.5613% ( 1) 00:10:11.899 24.983 - 25.086: 99.5782% ( 1) 00:10:11.899 25.292 - 25.394: 99.5951% ( 1) 00:10:11.899 25.600 - 25.703: 99.6119% ( 1) 00:10:11.899 26.320 - 26.525: 99.6288% ( 1) 00:10:11.899 26.731 - 26.937: 99.6457% ( 1) 00:10:11.899 26.937 - 27.142: 99.6626% ( 1) 00:10:11.899 27.142 - 27.348: 99.6794% ( 1) 00:10:11.899 27.553 - 27.759: 99.6963% ( 1) 00:10:11.899 28.170 - 28.376: 99.7132% ( 1) 00:10:11.899 29.198 - 29.404: 99.7300% ( 1) 00:10:11.899 29.404 - 29.610: 99.7469% ( 1) 00:10:11.899 30.843 - 31.049: 99.7638% ( 1) 00:10:11.899 32.488 - 32.694: 99.7975% ( 2) 00:10:11.899 34.339 - 34.545: 99.8144% ( 1) 00:10:11.899 37.629 - 37.835: 99.8313% ( 1) 00:10:11.899 37.835 - 38.040: 99.8482% ( 1) 00:10:11.899 40.508 - 40.713: 99.8650% ( 1) 00:10:11.899 41.536 - 41.741: 99.8988% ( 2) 00:10:11.899 41.947 - 42.153: 99.9156% ( 1) 00:10:11.899 45.443 - 45.648: 99.9325% ( 1) 00:10:11.899 51.200 - 51.406: 99.9494% ( 1) 00:10:11.899 80.604 - 81.015: 99.9663% ( 1) 00:10:11.899 99.110 - 99.521: 99.9831% ( 1) 00:10:11.899 101.989 - 102.400: 100.0000% ( 1) 00:10:11.899 00:10:11.899 Complete histogram 00:10:11.899 ================== 00:10:11.899 Range in us Cumulative Count 00:10:11.899 7.659 - 7.711: 0.1012% ( 6) 00:10:11.899 7.711 - 7.762: 0.9280% ( 49) 00:10:11.899 7.762 - 7.814: 2.3621% ( 85) 00:10:11.899 7.814 - 7.865: 4.3867% ( 120) 00:10:11.899 7.865 - 7.916: 7.7105% ( 197) 00:10:11.899 7.916 - 7.968: 9.8870% ( 129) 00:10:11.899 7.968 - 8.019: 12.4009% ( 149) 00:10:11.899 8.019 - 8.071: 14.8473% ( 145) 00:10:11.899 8.071 - 8.122: 16.2646% ( 84) 00:10:11.899 8.122 - 8.173: 16.9226% ( 39) 00:10:11.899 8.173 - 8.225: 17.1925% ( 16) 00:10:11.899 8.225 - 8.276: 17.3612% ( 10) 00:10:11.899 8.276 - 8.328: 17.4962% ( 8) 00:10:11.899 8.328 - 8.379: 17.5637% ( 4) 00:10:11.899 8.379 - 8.431: 17.6143% ( 3) 00:10:11.899 8.431 - 8.482: 17.6818% ( 4) 00:10:11.899 8.482 - 8.533: 17.7155% ( 2) 00:10:11.899 8.533 - 8.585: 17.7830% ( 4) 00:10:11.899 8.585 - 8.636: 18.0698% ( 17) 00:10:11.899 8.636 - 8.688: 20.3307% ( 134) 00:10:11.899 8.688 - 8.739: 
23.5364% ( 190) 00:10:11.899 8.739 - 8.790: 25.6791% ( 127) 00:10:11.899 8.790 - 8.842: 30.5213% ( 287) 00:10:11.899 8.842 - 8.893: 38.2656% ( 459) 00:10:11.899 8.893 - 8.945: 47.1233% ( 525) 00:10:11.899 8.945 - 8.996: 53.3660% ( 370) 00:10:11.899 8.996 - 9.047: 62.0381% ( 514) 00:10:11.899 9.047 - 9.099: 69.5124% ( 443) 00:10:11.899 9.099 - 9.150: 74.9958% ( 325) 00:10:11.899 9.150 - 9.202: 79.9899% ( 296) 00:10:11.899 9.202 - 9.253: 84.3428% ( 258) 00:10:11.899 9.253 - 9.304: 87.6329% ( 195) 00:10:11.899 9.304 - 9.356: 90.3492% ( 161) 00:10:11.899 9.356 - 9.407: 92.4076% ( 122) 00:10:11.899 9.407 - 9.459: 93.6730% ( 75) 00:10:11.899 9.459 - 9.510: 94.7697% ( 65) 00:10:11.899 9.510 - 9.561: 95.3265% ( 33) 00:10:11.899 9.561 - 9.613: 95.6302% ( 18) 00:10:11.899 9.613 - 9.664: 95.9001% ( 16) 00:10:11.899 9.664 - 9.716: 96.3050% ( 24) 00:10:11.899 9.716 - 9.767: 96.5413% ( 14) 00:10:11.899 9.767 - 9.818: 96.7943% ( 15) 00:10:11.899 9.818 - 9.870: 97.0305% ( 14) 00:10:11.899 9.870 - 9.921: 97.1993% ( 10) 00:10:11.899 9.921 - 9.973: 97.3174% ( 7) 00:10:11.899 9.973 - 10.024: 97.4692% ( 9) 00:10:11.899 10.024 - 10.076: 97.5198% ( 3) 00:10:11.899 10.076 - 10.127: 97.5704% ( 3) 00:10:11.899 10.127 - 10.178: 97.6885% ( 7) 00:10:11.899 10.178 - 10.230: 97.7223% ( 2) 00:10:11.899 10.230 - 10.281: 97.7392% ( 1) 00:10:11.899 10.281 - 10.333: 97.7898% ( 3) 00:10:11.899 10.333 - 10.384: 97.8235% ( 2) 00:10:11.899 10.384 - 10.435: 97.8741% ( 3) 00:10:11.899 10.435 - 10.487: 97.9248% ( 3) 00:10:11.899 10.590 - 10.641: 97.9416% ( 1) 00:10:11.899 10.641 - 10.692: 97.9585% ( 1) 00:10:11.899 10.744 - 10.795: 97.9922% ( 2) 00:10:11.899 11.463 - 11.515: 98.0091% ( 1) 00:10:11.899 11.669 - 11.720: 98.0260% ( 1) 00:10:11.899 11.720 - 11.772: 98.0429% ( 1) 00:10:11.899 11.772 - 11.823: 98.0597% ( 1) 00:10:11.899 12.029 - 12.080: 98.0766% ( 1) 00:10:11.900 12.080 - 12.132: 98.1103% ( 2) 00:10:11.900 12.132 - 12.183: 98.1272% ( 1) 00:10:11.900 12.800 - 12.851: 98.1441% ( 1) 00:10:11.900 12.903 - 12.954: 98.1610% ( 1) 00:10:11.900 12.954 - 13.006: 98.1778% ( 1) 00:10:11.900 13.006 - 13.057: 98.1947% ( 1) 00:10:11.900 13.108 - 13.160: 98.2284% ( 2) 00:10:11.900 13.263 - 13.365: 98.2453% ( 1) 00:10:11.900 13.365 - 13.468: 98.2959% ( 3) 00:10:11.900 13.571 - 13.674: 98.3128% ( 1) 00:10:11.900 13.674 - 13.777: 98.3465% ( 2) 00:10:11.900 13.777 - 13.880: 98.3972% ( 3) 00:10:11.900 13.880 - 13.982: 98.4140% ( 1) 00:10:11.900 14.085 - 14.188: 98.4647% ( 3) 00:10:11.900 14.188 - 14.291: 98.5153% ( 3) 00:10:11.900 14.291 - 14.394: 98.5659% ( 3) 00:10:11.900 14.394 - 14.496: 98.6671% ( 6) 00:10:11.900 14.496 - 14.599: 98.8190% ( 9) 00:10:11.900 14.599 - 14.702: 99.0046% ( 11) 00:10:11.900 14.702 - 14.805: 99.1227% ( 7) 00:10:11.900 14.805 - 14.908: 99.1733% ( 3) 00:10:11.900 14.908 - 15.010: 99.2576% ( 5) 00:10:11.900 15.010 - 15.113: 99.3251% ( 4) 00:10:11.900 15.113 - 15.216: 99.3757% ( 3) 00:10:11.900 15.216 - 15.319: 99.4601% ( 5) 00:10:11.900 15.319 - 15.422: 99.4770% ( 1) 00:10:11.900 15.422 - 15.524: 99.5107% ( 2) 00:10:11.900 15.524 - 15.627: 99.5276% ( 1) 00:10:11.900 15.627 - 15.730: 99.5445% ( 1) 00:10:11.900 15.730 - 15.833: 99.5613% ( 1) 00:10:11.900 15.833 - 15.936: 99.5782% ( 1) 00:10:11.900 16.141 - 16.244: 99.5951% ( 1) 00:10:11.900 16.655 - 16.758: 99.6119% ( 1) 00:10:11.900 16.861 - 16.964: 99.6288% ( 1) 00:10:11.900 20.562 - 20.665: 99.6457% ( 1) 00:10:11.900 21.488 - 21.590: 99.6626% ( 1) 00:10:11.900 21.899 - 22.002: 99.6794% ( 1) 00:10:11.900 22.413 - 22.516: 99.6963% ( 1) 00:10:11.900 23.030 - 23.133: 
99.7132% ( 1) 00:10:11.900 25.497 - 25.600: 99.7300% ( 1) 00:10:11.900 25.600 - 25.703: 99.7469% ( 1) 00:10:11.900 25.703 - 25.806: 99.7638% ( 1) 00:10:11.900 27.142 - 27.348: 99.7807% ( 1) 00:10:11.900 27.348 - 27.553: 99.7975% ( 1) 00:10:11.900 29.404 - 29.610: 99.8144% ( 1) 00:10:11.900 31.871 - 32.077: 99.8313% ( 1) 00:10:11.900 35.367 - 35.573: 99.8482% ( 1) 00:10:11.900 38.657 - 38.863: 99.8650% ( 1) 00:10:11.900 41.741 - 41.947: 99.8819% ( 1) 00:10:11.900 44.209 - 44.414: 99.8988% ( 1) 00:10:11.900 44.414 - 44.620: 99.9156% ( 1) 00:10:11.900 51.611 - 51.817: 99.9325% ( 1) 00:10:11.900 52.639 - 53.051: 99.9494% ( 1) 00:10:11.900 73.613 - 74.024: 99.9663% ( 1) 00:10:11.900 91.708 - 92.119: 99.9831% ( 1) 00:10:11.900 513.234 - 516.524: 100.0000% ( 1) 00:10:11.900 00:10:11.900 00:10:11.900 real 0m1.302s 00:10:11.900 user 0m1.107s 00:10:11.900 sys 0m0.147s 00:10:11.900 17:59:40 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:11.900 ************************************ 00:10:11.900 END TEST nvme_overhead 00:10:11.900 17:59:40 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:11.900 ************************************ 00:10:11.900 17:59:40 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:11.900 17:59:40 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:11.900 17:59:40 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:11.900 17:59:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:11.900 ************************************ 00:10:11.900 START TEST nvme_arbitration 00:10:11.900 ************************************ 00:10:11.900 17:59:40 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:15.193 Initializing NVMe Controllers 00:10:15.193 Attached to 0000:00:10.0 00:10:15.193 Attached to 0000:00:11.0 00:10:15.193 Attached to 0000:00:13.0 00:10:15.193 Attached to 0000:00:12.0 00:10:15.193 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:15.193 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:15.193 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:15.193 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:15.193 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:15.193 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:15.193 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:15.193 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:15.193 Initialization complete. Launching workers. 
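The arbitration example echoes its full effective configuration before launching workers. Only -t 3 -i 0 were passed explicitly (see the run_test line above); the rest are defaults, as the tool's own config echo shows. A sketch:

  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
  # effective config echoed by the tool itself:
  #   arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0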
00:10:15.193 Starting thread on core 1 with urgent priority queue 00:10:15.193 Starting thread on core 2 with urgent priority queue 00:10:15.193 Starting thread on core 3 with urgent priority queue 00:10:15.193 Starting thread on core 0 with urgent priority queue 00:10:15.193 QEMU NVMe Ctrl (12340 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:10:15.193 QEMU NVMe Ctrl (12342 ) core 0: 554.67 IO/s 180.29 secs/100000 ios 00:10:15.193 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:10:15.193 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios 00:10:15.193 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:10:15.193 QEMU NVMe Ctrl (12342 ) core 3: 640.00 IO/s 156.25 secs/100000 ios 00:10:15.193 ======================================================== 00:10:15.193 00:10:15.193 00:10:15.193 real 0m3.437s 00:10:15.193 user 0m9.402s 00:10:15.193 sys 0m0.174s 00:10:15.193 17:59:44 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.193 ************************************ 00:10:15.193 END TEST nvme_arbitration 00:10:15.193 ************************************ 00:10:15.193 17:59:44 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:15.193 17:59:44 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:15.193 17:59:44 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:15.193 17:59:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.193 17:59:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.193 ************************************ 00:10:15.193 START TEST nvme_single_aen 00:10:15.193 ************************************ 00:10:15.193 17:59:44 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:15.453 Asynchronous Event Request test 00:10:15.453 Attached to 0000:00:10.0 00:10:15.453 Attached to 0000:00:11.0 00:10:15.453 Attached to 0000:00:13.0 00:10:15.453 Attached to 0000:00:12.0 00:10:15.453 Reset controller to setup AER completions for this process 00:10:15.453 Registering asynchronous event callbacks... 
00:10:15.453 Getting orig temperature thresholds of all controllers 00:10:15.453 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.453 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.453 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.453 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:15.453 Setting all controllers temperature threshold low to trigger AER 00:10:15.453 Waiting for all controllers temperature threshold to be set lower 00:10:15.453 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.453 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:15.453 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.453 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:15.453 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.453 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:15.453 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:15.453 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:15.453 Waiting for all controllers to trigger AER and reset threshold 00:10:15.453 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.453 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.453 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.453 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:15.453 Cleaning up... 00:10:15.453 00:10:15.453 real 0m0.285s 00:10:15.453 user 0m0.093s 00:10:15.453 sys 0m0.155s 00:10:15.453 17:59:44 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:15.453 17:59:44 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:15.453 ************************************ 00:10:15.453 END TEST nvme_single_aen 00:10:15.453 ************************************ 00:10:15.713 17:59:44 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:15.713 17:59:44 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:15.713 17:59:44 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:15.713 17:59:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.713 ************************************ 00:10:15.713 START TEST nvme_doorbell_aers 00:10:15.713 ************************************ 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
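The trace above shows how nvme_doorbell_aers discovers its device list: gen_nvme.sh emits a JSON config and jq extracts each traddr. The same step in isolation (rootdir substituted with the checkout path shown in the log):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

The test then loops over ${bdfs[@]}, running doorbell_aers under timeout --preserve-status 10 against one controller at a time, as the per-BDF blocks below show.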
00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:15.713 17:59:44 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:15.972 [2024-11-05 17:59:45.227263] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:25.955 Executing: test_write_invalid_db 00:10:25.955 Waiting for AER completion... 00:10:25.955 Failure: test_write_invalid_db 00:10:25.955 00:10:25.955 Executing: test_invalid_db_write_overflow_sq 00:10:25.955 Waiting for AER completion... 00:10:25.955 Failure: test_invalid_db_write_overflow_sq 00:10:25.955 00:10:25.955 Executing: test_invalid_db_write_overflow_cq 00:10:25.955 Waiting for AER completion... 00:10:25.955 Failure: test_invalid_db_write_overflow_cq 00:10:25.955 00:10:25.955 17:59:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:25.955 17:59:54 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:25.955 [2024-11-05 17:59:55.278490] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:35.937 Executing: test_write_invalid_db 00:10:35.937 Waiting for AER completion... 00:10:35.937 Failure: test_write_invalid_db 00:10:35.937 00:10:35.937 Executing: test_invalid_db_write_overflow_sq 00:10:35.937 Waiting for AER completion... 00:10:35.937 Failure: test_invalid_db_write_overflow_sq 00:10:35.937 00:10:35.937 Executing: test_invalid_db_write_overflow_cq 00:10:35.937 Waiting for AER completion... 00:10:35.937 Failure: test_invalid_db_write_overflow_cq 00:10:35.937 00:10:35.937 18:00:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:35.937 18:00:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:36.195 [2024-11-05 18:00:05.344441] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:46.177 Executing: test_write_invalid_db 00:10:46.177 Waiting for AER completion... 00:10:46.178 Failure: test_write_invalid_db 00:10:46.178 00:10:46.178 Executing: test_invalid_db_write_overflow_sq 00:10:46.178 Waiting for AER completion... 00:10:46.178 Failure: test_invalid_db_write_overflow_sq 00:10:46.178 00:10:46.178 Executing: test_invalid_db_write_overflow_cq 00:10:46.178 Waiting for AER completion... 
00:10:46.178 Failure: test_invalid_db_write_overflow_cq 00:10:46.178 00:10:46.178 18:00:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:46.178 18:00:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:46.178 [2024-11-05 18:00:15.400109] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 Executing: test_write_invalid_db 00:10:56.192 Waiting for AER completion... 00:10:56.192 Failure: test_write_invalid_db 00:10:56.192 00:10:56.192 Executing: test_invalid_db_write_overflow_sq 00:10:56.192 Waiting for AER completion... 00:10:56.192 Failure: test_invalid_db_write_overflow_sq 00:10:56.192 00:10:56.192 Executing: test_invalid_db_write_overflow_cq 00:10:56.192 Waiting for AER completion... 00:10:56.192 Failure: test_invalid_db_write_overflow_cq 00:10:56.192 00:10:56.192 00:10:56.192 real 0m40.338s 00:10:56.192 user 0m28.636s 00:10:56.192 sys 0m11.337s 00:10:56.192 18:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.192 18:00:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:56.192 ************************************ 00:10:56.192 END TEST nvme_doorbell_aers 00:10:56.192 ************************************ 00:10:56.192 18:00:25 nvme -- nvme/nvme.sh@97 -- # uname 00:10:56.192 18:00:25 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:56.192 18:00:25 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:56.192 18:00:25 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:56.192 18:00:25 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.192 18:00:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:56.192 ************************************ 00:10:56.192 START TEST nvme_multi_aen 00:10:56.192 ************************************ 00:10:56.192 18:00:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:56.192 [2024-11-05 18:00:25.504552] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.504652] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.504670] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.506634] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.506683] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.506709] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.508273] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. 
Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.508313] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.508331] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.509774] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.509815] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 [2024-11-05 18:00:25.509829] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64338) is not found. Dropping the request. 00:10:56.192 Child process pid: 64860 00:10:56.761 [Child] Asynchronous Event Request test 00:10:56.761 [Child] Attached to 0000:00:10.0 00:10:56.761 [Child] Attached to 0000:00:11.0 00:10:56.761 [Child] Attached to 0000:00:13.0 00:10:56.761 [Child] Attached to 0000:00:12.0 00:10:56.761 [Child] Registering asynchronous event callbacks... 00:10:56.761 [Child] Getting orig temperature thresholds of all controllers 00:10:56.761 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:56.761 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 [Child] Cleaning up... 00:10:56.761 Asynchronous Event Request test 00:10:56.761 Attached to 0000:00:10.0 00:10:56.761 Attached to 0000:00:11.0 00:10:56.761 Attached to 0000:00:13.0 00:10:56.761 Attached to 0000:00:12.0 00:10:56.761 Reset controller to setup AER completions for this process 00:10:56.761 Registering asynchronous event callbacks... 
00:10:56.761 Getting orig temperature thresholds of all controllers 00:10:56.761 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:56.761 Setting all controllers temperature threshold low to trigger AER 00:10:56.761 Waiting for all controllers temperature threshold to be set lower 00:10:56.761 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:56.761 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:56.761 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:56.761 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:56.761 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:56.761 Waiting for all controllers to trigger AER and reset threshold 00:10:56.761 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:56.761 Cleaning up... 00:10:56.761 00:10:56.761 real 0m0.622s 00:10:56.761 user 0m0.219s 00:10:56.761 sys 0m0.293s 00:10:56.761 18:00:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.761 18:00:25 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:56.761 ************************************ 00:10:56.761 END TEST nvme_multi_aen 00:10:56.761 ************************************ 00:10:56.761 18:00:25 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:56.761 18:00:25 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:56.761 18:00:25 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.761 18:00:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:56.761 ************************************ 00:10:56.761 START TEST nvme_startup 00:10:56.761 ************************************ 00:10:56.761 18:00:25 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:57.021 Initializing NVMe Controllers 00:10:57.021 Attached to 0000:00:10.0 00:10:57.021 Attached to 0000:00:11.0 00:10:57.021 Attached to 0000:00:13.0 00:10:57.021 Attached to 0000:00:12.0 00:10:57.021 Initialization complete. 00:10:57.021 Time used:181731.172 (us). 
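nvme_startup was invoked as startup -t 1000000 (see the run_test line above). Reading -t as a microsecond budget that the reported "Time used: 181731.172 (us)" must stay under is an assumption based on that output line, not the tool's documentation. Sketch:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000   # assumed: fail if startup exceeds 1,000,000 us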
00:10:57.021 00:10:57.021 real 0m0.286s 00:10:57.021 user 0m0.100s 00:10:57.021 sys 0m0.143s 00:10:57.021 18:00:26 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:57.021 18:00:26 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:57.021 ************************************ 00:10:57.021 END TEST nvme_startup 00:10:57.021 ************************************ 00:10:57.021 18:00:26 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:57.021 18:00:26 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:57.021 18:00:26 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:57.021 18:00:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:57.021 ************************************ 00:10:57.021 START TEST nvme_multi_secondary 00:10:57.021 ************************************ 00:10:57.021 18:00:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:10:57.021 18:00:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64916 00:10:57.021 18:00:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:57.021 18:00:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64917 00:10:57.021 18:00:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:57.021 18:00:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:01.213 Initializing NVMe Controllers 00:11:01.213 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:01.213 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:01.213 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:01.213 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:01.213 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:01.213 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:01.213 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:01.213 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:01.213 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:01.213 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:01.213 Initialization complete. Launching workers. 
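nvme_multi_secondary drives three spdk_nvme_perf processes against one shared SPDK instance: all pass -i 0, and differ only in core mask (-c) and runtime (-t). A condensed sketch of the pattern traced above; the backgrounding is illustrative (the harness tracks the first two as $pid0/$pid1 and waits on them explicitly), and calling the -c 0x1 process the primary is an assumption based on it being started first:

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  sudo $perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # started first; primary, core 0
  sudo $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
  sudo $perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary, core 2
  wait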
00:11:01.213 ======================================================== 00:11:01.213 Latency(us) 00:11:01.214 Device Information : IOPS MiB/s Average min max 00:11:01.214 PCIE (0000:00:10.0) NSID 1 from core 1: 4715.89 18.42 3390.30 1714.01 7194.71 00:11:01.214 PCIE (0000:00:11.0) NSID 1 from core 1: 4715.89 18.42 3392.28 1631.64 6945.17 00:11:01.214 PCIE (0000:00:13.0) NSID 1 from core 1: 4715.89 18.42 3392.76 1687.46 7089.82 00:11:01.214 PCIE (0000:00:12.0) NSID 1 from core 1: 4715.89 18.42 3392.86 1777.63 6926.47 00:11:01.214 PCIE (0000:00:12.0) NSID 2 from core 1: 4715.89 18.42 3392.94 1804.56 6842.04 00:11:01.214 PCIE (0000:00:12.0) NSID 3 from core 1: 4715.89 18.42 3393.04 1823.55 6956.45 00:11:01.214 ======================================================== 00:11:01.214 Total : 28295.33 110.53 3392.36 1631.64 7194.71 00:11:01.214 00:11:01.214 Initializing NVMe Controllers 00:11:01.214 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:01.214 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:01.214 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:01.214 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:01.214 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:01.214 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:01.214 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:01.214 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:01.214 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:01.214 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:01.214 Initialization complete. Launching workers. 00:11:01.214 ======================================================== 00:11:01.214 Latency(us) 00:11:01.214 Device Information : IOPS MiB/s Average min max 00:11:01.214 PCIE (0000:00:10.0) NSID 1 from core 2: 3467.70 13.55 4612.83 1137.31 13700.65 00:11:01.214 PCIE (0000:00:11.0) NSID 1 from core 2: 3467.70 13.55 4613.57 1033.74 13207.22 00:11:01.214 PCIE (0000:00:13.0) NSID 1 from core 2: 3467.70 13.55 4608.44 1148.16 13536.25 00:11:01.214 PCIE (0000:00:12.0) NSID 1 from core 2: 3467.70 13.55 4607.23 1163.95 13689.57 00:11:01.214 PCIE (0000:00:12.0) NSID 2 from core 2: 3467.70 13.55 4607.18 941.87 13396.81 00:11:01.214 PCIE (0000:00:12.0) NSID 3 from core 2: 3467.70 13.55 4607.06 946.02 13619.29 00:11:01.214 ======================================================== 00:11:01.214 Total : 20806.17 81.27 4609.39 941.87 13700.65 00:11:01.214 00:11:01.214 18:00:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64916 00:11:02.594 Initializing NVMe Controllers 00:11:02.594 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:02.594 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:02.594 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:02.594 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:02.594 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:02.594 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:02.594 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:02.594 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:02.594 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:02.594 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:02.594 Initialization complete. Launching workers. 
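As a sanity check, the MiB/s column in these tables is just IOPS scaled by the 4096-byte I/O size (-o 4096): MiB/s = IOPS * 4096 / 2^20. Against the first core-1 row above:

  echo '4715.89 * 4096 / 1048576' | bc -l   # 18.4215..., matching the 18.42 MiB/s in the table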
00:11:02.594 ======================================================== 00:11:02.594 Latency(us) 00:11:02.594 Device Information : IOPS MiB/s Average min max 00:11:02.594 PCIE (0000:00:10.0) NSID 1 from core 0: 8008.03 31.28 1996.48 919.04 9375.70 00:11:02.594 PCIE (0000:00:11.0) NSID 1 from core 0: 8008.03 31.28 1997.54 955.16 11019.36 00:11:02.594 PCIE (0000:00:13.0) NSID 1 from core 0: 8008.03 31.28 1997.49 941.66 9705.57 00:11:02.594 PCIE (0000:00:12.0) NSID 1 from core 0: 8008.03 31.28 1997.46 904.63 9673.81 00:11:02.594 PCIE (0000:00:12.0) NSID 2 from core 0: 8008.03 31.28 1997.42 818.02 9001.83 00:11:02.594 PCIE (0000:00:12.0) NSID 3 from core 0: 8011.22 31.29 1996.59 749.95 9622.64 00:11:02.594 ======================================================== 00:11:02.594 Total : 48051.35 187.70 1997.16 749.95 11019.36 00:11:02.594 00:11:02.594 18:00:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64917 00:11:02.594 18:00:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64986 00:11:02.594 18:00:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:02.594 18:00:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64987 00:11:02.594 18:00:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:02.594 18:00:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:06.113 Initializing NVMe Controllers 00:11:06.113 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:06.113 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:06.113 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:06.113 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:06.113 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:06.113 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:06.113 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:06.113 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:06.113 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:06.113 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:06.113 Initialization complete. Launching workers. 
00:11:06.113 ======================================================== 00:11:06.113 Latency(us) 00:11:06.113 Device Information : IOPS MiB/s Average min max 00:11:06.113 PCIE (0000:00:10.0) NSID 1 from core 1: 5526.23 21.59 2893.15 964.24 5641.81 00:11:06.113 PCIE (0000:00:11.0) NSID 1 from core 1: 5526.23 21.59 2895.18 970.60 6226.04 00:11:06.113 PCIE (0000:00:13.0) NSID 1 from core 1: 5526.23 21.59 2895.27 953.25 6528.09 00:11:06.113 PCIE (0000:00:12.0) NSID 1 from core 1: 5526.23 21.59 2895.43 986.58 6771.22 00:11:06.113 PCIE (0000:00:12.0) NSID 2 from core 1: 5526.23 21.59 2895.77 981.78 6714.60 00:11:06.113 PCIE (0000:00:12.0) NSID 3 from core 1: 5531.56 21.61 2893.08 985.42 6019.18 00:11:06.113 ======================================================== 00:11:06.113 Total : 33162.74 129.54 2894.65 953.25 6771.22 00:11:06.113 00:11:06.113 Initializing NVMe Controllers 00:11:06.113 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:06.113 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:06.113 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:06.113 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:06.113 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:06.113 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:06.113 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:06.113 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:06.113 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:06.113 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:06.113 Initialization complete. Launching workers. 00:11:06.113 ======================================================== 00:11:06.113 Latency(us) 00:11:06.113 Device Information : IOPS MiB/s Average min max 00:11:06.113 PCIE (0000:00:10.0) NSID 1 from core 0: 5220.73 20.39 3062.28 983.77 7678.03 00:11:06.113 PCIE (0000:00:11.0) NSID 1 from core 0: 5220.73 20.39 3063.98 1015.56 7718.80 00:11:06.113 PCIE (0000:00:13.0) NSID 1 from core 0: 5220.73 20.39 3063.93 973.10 7251.56 00:11:06.113 PCIE (0000:00:12.0) NSID 1 from core 0: 5220.73 20.39 3063.86 972.70 7271.47 00:11:06.113 PCIE (0000:00:12.0) NSID 2 from core 0: 5220.73 20.39 3063.75 956.69 7331.49 00:11:06.113 PCIE (0000:00:12.0) NSID 3 from core 0: 5220.73 20.39 3063.64 913.25 7301.09 00:11:06.113 ======================================================== 00:11:06.113 Total : 31324.40 122.36 3063.57 913.25 7718.80 00:11:06.113 00:11:08.020 Initializing NVMe Controllers 00:11:08.020 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:08.020 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:08.020 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:08.020 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:08.020 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:08.020 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:08.020 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:08.020 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:08.020 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:08.020 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:08.020 Initialization complete. Launching workers. 
00:11:08.020 ======================================================== 00:11:08.020 Latency(us) 00:11:08.020 Device Information : IOPS MiB/s Average min max 00:11:08.020 PCIE (0000:00:10.0) NSID 1 from core 2: 3274.25 12.79 4884.68 1208.41 12107.30 00:11:08.020 PCIE (0000:00:11.0) NSID 1 from core 2: 3274.25 12.79 4886.36 1237.86 10951.03 00:11:08.020 PCIE (0000:00:13.0) NSID 1 from core 2: 3274.25 12.79 4886.26 1213.40 10685.85 00:11:08.020 PCIE (0000:00:12.0) NSID 1 from core 2: 3274.25 12.79 4885.93 1188.62 12187.18 00:11:08.020 PCIE (0000:00:12.0) NSID 2 from core 2: 3274.25 12.79 4885.86 1173.32 12703.29 00:11:08.020 PCIE (0000:00:12.0) NSID 3 from core 2: 3274.25 12.79 4885.80 1142.58 13327.90 00:11:08.020 ======================================================== 00:11:08.020 Total : 19645.48 76.74 4885.82 1142.58 13327.90 00:11:08.020 00:11:08.020 18:00:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64986 00:11:08.020 18:00:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64987 00:11:08.020 00:11:08.020 real 0m10.864s 00:11:08.020 user 0m18.549s 00:11:08.020 sys 0m1.062s 00:11:08.020 18:00:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:08.020 18:00:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:08.020 ************************************ 00:11:08.020 END TEST nvme_multi_secondary 00:11:08.020 ************************************ 00:11:08.020 18:00:37 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:08.020 18:00:37 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:08.020 18:00:37 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/63923 ]] 00:11:08.020 18:00:37 nvme -- common/autotest_common.sh@1092 -- # kill 63923 00:11:08.020 18:00:37 nvme -- common/autotest_common.sh@1093 -- # wait 63923 00:11:08.020 [2024-11-05 18:00:37.222987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.020 [2024-11-05 18:00:37.223161] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.223240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.223294] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.228913] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.229003] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.229040] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.229080] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.234541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 
00:11:08.021 [2024-11-05 18:00:37.234602] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.234626] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.234652] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.238376] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.238457] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.238481] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.021 [2024-11-05 18:00:37.238507] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64856) is not found. Dropping the request. 00:11:08.280 18:00:37 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:11:08.280 18:00:37 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:11:08.280 18:00:37 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:08.280 18:00:37 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:08.280 18:00:37 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:08.280 18:00:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:08.280 ************************************ 00:11:08.280 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:08.280 ************************************ 00:11:08.280 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:08.280 * Looking for test storage... 
00:11:08.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:08.280 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:08.280 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:11:08.280 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:08.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.540 --rc genhtml_branch_coverage=1 00:11:08.540 --rc genhtml_function_coverage=1 00:11:08.540 --rc genhtml_legend=1 00:11:08.540 --rc geninfo_all_blocks=1 00:11:08.540 --rc geninfo_unexecuted_blocks=1 00:11:08.540 00:11:08.540 ' 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:08.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.540 --rc genhtml_branch_coverage=1 00:11:08.540 --rc genhtml_function_coverage=1 00:11:08.540 --rc genhtml_legend=1 00:11:08.540 --rc geninfo_all_blocks=1 00:11:08.540 --rc geninfo_unexecuted_blocks=1 00:11:08.540 00:11:08.540 ' 00:11:08.540 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:08.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.540 --rc genhtml_branch_coverage=1 00:11:08.540 --rc genhtml_function_coverage=1 00:11:08.540 --rc genhtml_legend=1 00:11:08.540 --rc geninfo_all_blocks=1 00:11:08.541 --rc geninfo_unexecuted_blocks=1 00:11:08.541 00:11:08.541 ' 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.541 --rc genhtml_branch_coverage=1 00:11:08.541 --rc genhtml_function_coverage=1 00:11:08.541 --rc genhtml_legend=1 00:11:08.541 --rc geninfo_all_blocks=1 00:11:08.541 --rc geninfo_unexecuted_blocks=1 00:11:08.541 00:11:08.541 ' 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:08.541 
18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65153 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65153 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65153 ']' 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
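The get_first_nvme_bdf dance traced above condenses to this sketch (paths taken from this log; the guard mirrors the '-z' check):

bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} )) || exit 1    # the test aborts if no controller is enumerated
bdf=${bdfs[0]}                 # first of the four controllers: 0000:00:10.0 on this run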
00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:08.541 18:00:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:08.800 [2024-11-05 18:00:37.884062] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:11:08.800 [2024-11-05 18:00:37.884201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65153 ] 00:11:08.800 [2024-11-05 18:00:38.089315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.060 [2024-11-05 18:00:38.197305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.060 [2024-11-05 18:00:38.197488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.060 [2024-11-05 18:00:38.197673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.060 [2024-11-05 18:00:38.197697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:09.998 nvme0n1 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_1FLK5.txt 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:09.998 true 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730829639 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65176 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:09.998 18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:09.998 
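The -c argument to bdev_nvme_send_cmd above is a base64-encoded 64-byte admin submission-queue entry. As a hedged aside (byte offsets from the NVMe command layout, not from this log), it can be peeked at with the same base64/hexdump combination the test itself uses for the completion:

cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
printf '%s' "$cmd" | base64 -d | hexdump -ve '/1 "0x%02x\n"' | sed -n '1p;41p'
# byte 0  -> 0x0a : opcode, Get Features
# byte 40 -> 0x07 : CDW10 low byte, the Number of Queues feature, matching the
#                   'GET FEATURES NUMBER OF QUEUES ... cdw10:00000007' completion below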
18:00:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:11.903 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:11.903 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.903 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:11.903 [2024-11-05 18:00:41.202904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:11.903 [2024-11-05 18:00:41.204063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:11.903 [2024-11-05 18:00:41.204545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:11.903 [2024-11-05 18:00:41.204958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.903 [2024-11-05 18:00:41.209646] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:11.903 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.903 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65176 00:11:11.903 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65176 00:11:11.903 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65176 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_1FLK5.txt 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_1FLK5.txt 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65153 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65153 ']' 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65153 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65153 00:11:12.163 killing process with pid 65153 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65153' 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65153 00:11:12.163 18:00:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65153 00:11:14.749 18:00:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:14.749 18:00:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:14.749 00:11:14.749 real 0m6.345s 00:11:14.749 user 0m22.122s 00:11:14.749 sys 0m0.792s 00:11:14.749 18:00:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # 
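Read back, the completion decode traced above amounts to this sketch (the (shift, mask) pairs (1, 255) and (9, 3) are the base64_decode_bits arguments from the trace; placing the status word at bytes 14-15 of the 16-byte completion is an assumption from the NVMe CQE layout):

cpl=AAAAAAAAAAAAAAAAAAACAA==   # the .cpl field pulled from /tmp/err_inj_1FLK5.txt
bytes=($(printf '%s' "$cpl" | base64 -d | hexdump -ve '/1 "0x%02x\n"'))
status=$(( (bytes[15] << 8) | bytes[14] ))
printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 255 )) $(( (status >> 9) & 3 ))
# -> sc=0x1 sct=0x0, matching the injected --sct 0 --sc 1, so neither
#    post-check below ('err_injection_sc != nvme_status_sc ...') trips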
xtrace_disable 00:11:14.749 ************************************ 00:11:14.749 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:14.749 ************************************ 00:11:14.749 18:00:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:14.749 18:00:43 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:14.749 18:00:43 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:14.749 18:00:43 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:14.749 18:00:43 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.749 18:00:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:14.749 ************************************ 00:11:14.749 START TEST nvme_fio 00:11:14.749 ************************************ 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:14.749 18:00:43 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:14.749 18:00:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:15.009 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:15.009 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:15.268 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:15.268 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:15.268 18:00:44 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:15.268 18:00:44 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:15.527 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:15.527 fio-3.35 00:11:15.527 Starting 1 thread 00:11:19.722 00:11:19.722 test: (groupid=0, jobs=1): err= 0: pid=65327: Tue Nov 5 18:00:48 2024 00:11:19.722 read: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec) 00:11:19.722 slat (usec): min=3, max=269, avg= 4.58, stdev= 1.76 00:11:19.722 clat (usec): min=219, max=10234, avg=2841.70, stdev=240.80 00:11:19.722 lat (usec): min=223, max=10290, avg=2846.28, stdev=241.11 00:11:19.722 clat percentiles (usec): 00:11:19.722 | 1.00th=[ 2507], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:19.722 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:19.722 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3064], 00:11:19.722 | 99.00th=[ 3425], 99.50th=[ 3621], 99.90th=[ 5407], 99.95th=[ 7898], 00:11:19.722 | 99.99th=[10028] 00:11:19.722 bw ( KiB/s): min=87096, max=90464, per=99.14%, avg=89128.00, stdev=1788.62, samples=3 00:11:19.722 iops : min=21774, max=22616, avg=22282.00, stdev=447.16, samples=3 00:11:19.722 write: IOPS=22.3k, BW=87.3MiB/s (91.5MB/s)(175MiB/2001msec); 0 zone resets 00:11:19.722 slat (usec): min=4, max=348, avg= 4.77, stdev= 3.10 00:11:19.722 clat (usec): min=203, max=10146, avg=2844.49, stdev=245.04 00:11:19.722 lat (usec): min=207, max=10159, avg=2849.26, stdev=245.35 00:11:19.722 clat percentiles (usec): 00:11:19.722 | 1.00th=[ 2507], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:19.722 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:19.722 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3032], 00:11:19.722 | 99.00th=[ 3425], 99.50th=[ 3621], 99.90th=[ 6194], 99.95th=[ 8094], 00:11:19.722 | 99.99th=[ 9896] 00:11:19.722 bw ( KiB/s): min=86800, max=90904, per=99.95%, avg=89320.00, stdev=2206.30, samples=3 00:11:19.722 iops : min=21700, max=22726, avg=22330.00, stdev=551.58, samples=3 00:11:19.722 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:11:19.722 lat (msec) : 2=0.17%, 4=99.47%, 10=0.28%, 20=0.01% 00:11:19.722 cpu : usr=98.85%, sys=0.20%, ctx=39, majf=0, minf=607 
00:11:19.722 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:19.722 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:19.722 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:19.722 issued rwts: total=44973,44703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:19.722 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:19.722 00:11:19.722 Run status group 0 (all jobs): 00:11:19.722 READ: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:11:19.722 WRITE: bw=87.3MiB/s (91.5MB/s), 87.3MiB/s-87.3MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:11:19.722 ----------------------------------------------------- 00:11:19.722 Suppressions used: 00:11:19.722 count bytes template 00:11:19.722 1 32 /usr/src/fio/parse.c 00:11:19.722 1 8 libtcmalloc_minimal.so 00:11:19.722 ----------------------------------------------------- 00:11:19.722 00:11:19.722 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:19.722 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:19.722 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:19.722 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:19.722 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:19.722 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:19.982 18:00:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:19.982 18:00:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:19.982 18:00:49 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:19.982 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:20.241 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:20.241 fio-3.35 00:11:20.241 Starting 1 thread 00:11:24.457 00:11:24.457 test: (groupid=0, jobs=1): err= 0: pid=65393: Tue Nov 5 18:00:53 2024 00:11:24.457 read: IOPS=22.3k, BW=87.0MiB/s (91.2MB/s)(174MiB/2001msec) 00:11:24.457 slat (nsec): min=3789, max=55909, avg=4551.32, stdev=1257.66 00:11:24.457 clat (usec): min=230, max=10399, avg=2872.91, stdev=532.03 00:11:24.457 lat (usec): min=234, max=10455, avg=2877.46, stdev=532.71 00:11:24.457 clat percentiles (usec): 00:11:24.457 | 1.00th=[ 2442], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:11:24.457 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:24.457 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3195], 00:11:24.457 | 99.00th=[ 5538], 99.50th=[ 6849], 99.90th=[ 8356], 99.95th=[ 8455], 00:11:24.457 | 99.99th=[10159] 00:11:24.457 bw ( KiB/s): min=81572, max=93104, per=99.44%, avg=88545.33, stdev=6133.49, samples=3 00:11:24.457 iops : min=20393, max=23276, avg=22136.33, stdev=1533.37, samples=3 00:11:24.457 write: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(173MiB/2001msec); 0 zone resets 00:11:24.457 slat (usec): min=3, max=157, avg= 4.73, stdev= 1.47 00:11:24.457 clat (usec): min=198, max=10277, avg=2870.47, stdev=525.80 00:11:24.457 lat (usec): min=204, max=10290, avg=2875.20, stdev=526.49 00:11:24.457 clat percentiles (usec): 00:11:24.457 | 1.00th=[ 2442], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:11:24.457 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:24.457 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3195], 00:11:24.457 | 99.00th=[ 5473], 99.50th=[ 6783], 99.90th=[ 8455], 99.95th=[ 8586], 00:11:24.457 | 99.99th=[ 9896] 00:11:24.457 bw ( KiB/s): min=81285, max=93968, per=100.00%, avg=88737.67, stdev=6627.12, samples=3 00:11:24.457 iops : min=20321, max=23492, avg=22184.33, stdev=1656.92, samples=3 00:11:24.457 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:24.457 lat (msec) : 2=0.14%, 4=97.22%, 10=2.59%, 20=0.01% 00:11:24.457 cpu : usr=99.30%, sys=0.15%, ctx=3, majf=0, minf=607 00:11:24.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:24.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.457 issued rwts: total=44546,44243,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.457 00:11:24.457 Run status group 0 (all jobs): 00:11:24.457 READ: bw=87.0MiB/s (91.2MB/s), 87.0MiB/s-87.0MiB/s (91.2MB/s-91.2MB/s), io=174MiB (182MB), run=2001-2001msec 00:11:24.457 WRITE: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=173MiB (181MB), run=2001-2001msec 00:11:24.457 ----------------------------------------------------- 00:11:24.457 Suppressions used: 00:11:24.457 count bytes template 00:11:24.457 1 32 /usr/src/fio/parse.c 00:11:24.457 1 8 libtcmalloc_minimal.so 00:11:24.457 ----------------------------------------------------- 00:11:24.457 
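One detail worth flagging in the fio invocations above: the PCIe address is written with dots (traddr=0000.00.11.0) instead of the colons used everywhere else in this log. fio reserves ':' as a --filename separator, so the BDF must be rewritten before the run; a minimal sketch of that step (the substitution is assumed helper logic, only the final command line is from the trace):

bdf=0000:00:11.0
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096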
00:11:24.457 18:00:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:24.457 18:00:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:24.457 18:00:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:24.457 18:00:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:24.457 18:00:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:24.457 18:00:53 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:24.716 18:00:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:24.716 18:00:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:24.716 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:24.976 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:24.976 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:24.976 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:24.976 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:24.976 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:24.976 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:24.976 fio-3.35 00:11:24.976 Starting 1 thread 00:11:29.169 00:11:29.169 test: (groupid=0, jobs=1): err= 0: pid=65460: Tue Nov 5 18:00:58 2024 00:11:29.169 read: IOPS=22.5k, BW=87.7MiB/s (92.0MB/s)(176MiB/2001msec) 00:11:29.169 slat (nsec): min=3889, max=49842, avg=4622.46, stdev=1173.13 00:11:29.169 clat (usec): min=219, max=10899, avg=2848.08, stdev=451.67 00:11:29.169 lat (usec): min=223, max=10945, avg=2852.71, stdev=452.29 00:11:29.169 clat percentiles (usec): 00:11:29.169 | 1.00th=[ 2311], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:11:29.169 | 
30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:29.169 | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 3228], 95.00th=[ 3392], 00:11:29.169 | 99.00th=[ 4555], 99.50th=[ 5538], 99.90th=[ 8225], 99.95th=[ 8455], 00:11:29.169 | 99.99th=[10552] 00:11:29.169 bw ( KiB/s): min=88800, max=93376, per=100.00%, avg=90808.00, stdev=2338.83, samples=3 00:11:29.169 iops : min=22200, max=23344, avg=22702.00, stdev=584.71, samples=3 00:11:29.169 write: IOPS=22.3k, BW=87.2MiB/s (91.4MB/s)(174MiB/2001msec); 0 zone resets 00:11:29.169 slat (nsec): min=3985, max=33745, avg=4750.42, stdev=1123.80 00:11:29.169 clat (usec): min=234, max=10742, avg=2841.85, stdev=435.79 00:11:29.169 lat (usec): min=238, max=10754, avg=2846.60, stdev=436.34 00:11:29.169 clat percentiles (usec): 00:11:29.169 | 1.00th=[ 2311], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:11:29.169 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:29.169 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 3195], 95.00th=[ 3392], 00:11:29.169 | 99.00th=[ 4293], 99.50th=[ 5145], 99.90th=[ 8225], 99.95th=[ 8848], 00:11:29.169 | 99.99th=[10421] 00:11:29.169 bw ( KiB/s): min=88176, max=94520, per=100.00%, avg=90978.67, stdev=3235.86, samples=3 00:11:29.169 iops : min=22044, max=23630, avg=22744.67, stdev=808.97, samples=3 00:11:29.169 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:29.169 lat (msec) : 2=0.29%, 4=98.39%, 10=1.26%, 20=0.02% 00:11:29.169 cpu : usr=99.15%, sys=0.25%, ctx=5, majf=0, minf=607 00:11:29.169 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:29.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.169 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.169 issued rwts: total=44947,44669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.169 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.169 00:11:29.169 Run status group 0 (all jobs): 00:11:29.169 READ: bw=87.7MiB/s (92.0MB/s), 87.7MiB/s-87.7MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec 00:11:29.169 WRITE: bw=87.2MiB/s (91.4MB/s), 87.2MiB/s-87.2MiB/s (91.4MB/s-91.4MB/s), io=174MiB (183MB), run=2001-2001msec 00:11:29.169 ----------------------------------------------------- 00:11:29.169 Suppressions used: 00:11:29.169 count bytes template 00:11:29.169 1 32 /usr/src/fio/parse.c 00:11:29.169 1 8 libtcmalloc_minimal.so 00:11:29.169 ----------------------------------------------------- 00:11:29.169 00:11:29.169 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:29.169 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:29.169 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:29.169 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:29.429 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:29.429 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:29.688 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:29.688 18:00:58 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:29.688 18:00:58 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:29.947 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:29.947 fio-3.35 00:11:29.947 Starting 1 thread 00:11:35.243 00:11:35.243 test: (groupid=0, jobs=1): err= 0: pid=65521: Tue Nov 5 18:01:04 2024 00:11:35.243 read: IOPS=22.9k, BW=89.3MiB/s (93.7MB/s)(179MiB/2001msec) 00:11:35.243 slat (nsec): min=3838, max=32710, avg=4463.26, stdev=732.41 00:11:35.243 clat (usec): min=198, max=10927, avg=2805.52, stdev=288.77 00:11:35.243 lat (usec): min=202, max=10960, avg=2809.98, stdev=288.84 00:11:35.243 clat percentiles (usec): 00:11:35.243 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:11:35.243 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:35.243 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2966], 00:11:35.243 | 99.00th=[ 3261], 99.50th=[ 3556], 99.90th=[ 7767], 99.95th=[ 8848], 00:11:35.243 | 99.99th=[10814] 00:11:35.243 bw ( KiB/s): min=89868, max=91256, per=99.23%, avg=90772.00, stdev=783.54, samples=3 00:11:35.243 iops : min=22467, max=22814, avg=22693.00, stdev=195.89, samples=3 00:11:35.243 write: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(178MiB/2001msec); 0 zone resets 00:11:35.243 slat (nsec): min=3973, max=26957, avg=4663.37, stdev=801.97 00:11:35.243 clat (usec): min=190, max=10876, avg=2789.47, stdev=282.30 00:11:35.243 lat (usec): min=194, max=10882, avg=2794.14, stdev=282.35 00:11:35.243 clat percentiles (usec): 00:11:35.243 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:11:35.243 | 30.00th=[ 2737], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:11:35.243 | 70.00th=[ 2802], 80.00th=[ 
2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:11:35.243 | 99.00th=[ 3195], 99.50th=[ 3392], 99.90th=[ 8291], 99.95th=[ 8979], 00:11:35.243 | 99.99th=[10814] 00:11:35.244 bw ( KiB/s): min=89245, max=92768, per=100.00%, avg=90951.00, stdev=1764.12, samples=3 00:11:35.244 iops : min=22311, max=23192, avg=22737.67, stdev=441.15, samples=3 00:11:35.244 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:11:35.244 lat (msec) : 2=0.09%, 4=99.52%, 10=0.32%, 20=0.03% 00:11:35.244 cpu : usr=99.30%, sys=0.20%, ctx=2, majf=0, minf=606 00:11:35.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:35.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:35.244 issued rwts: total=45762,45485,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:35.244 00:11:35.244 Run status group 0 (all jobs): 00:11:35.244 READ: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=179MiB (187MB), run=2001-2001msec 00:11:35.244 WRITE: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec 00:11:35.244 ----------------------------------------------------- 00:11:35.244 Suppressions used: 00:11:35.244 count bytes template 00:11:35.244 1 32 /usr/src/fio/parse.c 00:11:35.244 1 8 libtcmalloc_minimal.so 00:11:35.244 ----------------------------------------------------- 00:11:35.244 00:11:35.244 18:01:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:35.244 18:01:04 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:35.244 00:11:35.244 real 0m20.456s 00:11:35.244 user 0m14.991s 00:11:35.244 sys 0m7.067s 00:11:35.244 18:01:04 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:35.244 18:01:04 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:35.244 ************************************ 00:11:35.244 END TEST nvme_fio 00:11:35.244 ************************************ 00:11:35.244 00:11:35.244 real 1m35.447s 00:11:35.244 user 3m43.056s 00:11:35.244 sys 0m26.252s 00:11:35.244 18:01:04 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:35.244 18:01:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:35.244 ************************************ 00:11:35.244 END TEST nvme 00:11:35.244 ************************************ 00:11:35.244 18:01:04 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:35.244 18:01:04 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:35.244 18:01:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:35.244 18:01:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.244 18:01:04 -- common/autotest_common.sh@10 -- # set +x 00:11:35.244 ************************************ 00:11:35.244 START TEST nvme_scc 00:11:35.244 ************************************ 00:11:35.244 18:01:04 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:35.503 * Looking for test storage... 
00:11:35.503 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:35.503 18:01:04 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:35.503 18:01:04 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:35.503 18:01:04 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:35.503 18:01:04 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:35.503 18:01:04 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.503 18:01:04 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.503 18:01:04 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.503 18:01:04 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:35.504 18:01:04 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.504 18:01:04 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:35.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.504 --rc genhtml_branch_coverage=1 00:11:35.504 --rc genhtml_function_coverage=1 00:11:35.504 --rc genhtml_legend=1 00:11:35.504 --rc geninfo_all_blocks=1 00:11:35.504 --rc geninfo_unexecuted_blocks=1 00:11:35.504 00:11:35.504 ' 00:11:35.504 18:01:04 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:35.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.504 --rc genhtml_branch_coverage=1 00:11:35.504 --rc genhtml_function_coverage=1 00:11:35.504 --rc genhtml_legend=1 00:11:35.504 --rc geninfo_all_blocks=1 00:11:35.504 --rc geninfo_unexecuted_blocks=1 00:11:35.504 00:11:35.504 ' 00:11:35.504 18:01:04 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:35.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.504 --rc genhtml_branch_coverage=1 00:11:35.504 --rc genhtml_function_coverage=1 00:11:35.504 --rc genhtml_legend=1 00:11:35.504 --rc geninfo_all_blocks=1 00:11:35.504 --rc geninfo_unexecuted_blocks=1 00:11:35.504 00:11:35.504 ' 00:11:35.504 18:01:04 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:35.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.504 --rc genhtml_branch_coverage=1 00:11:35.504 --rc genhtml_function_coverage=1 00:11:35.504 --rc genhtml_legend=1 00:11:35.504 --rc geninfo_all_blocks=1 00:11:35.504 --rc geninfo_unexecuted_blocks=1 00:11:35.504 00:11:35.504 ' 00:11:35.504 18:01:04 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.504 18:01:04 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.504 18:01:04 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.504 18:01:04 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.504 18:01:04 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.504 18:01:04 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:35.504 18:01:04 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
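The lt/cmp_versions xtrace earlier in this preamble (scripts/common.sh@333-368) is a pure-bash version comparison: both strings are split on '.', '-' and ':' into arrays and compared component-wise, numerically. That is how the `lt 1.15 2` call above concluded the installed lcov predates 2.x and selected the legacy `--rc lcov_*` spelling of the coverage flags exported just above. An approximate reconstruction (the real helper also supports the other comparison operators via its `op` argument, elided here):

# Approximate reconstruction of the cmp_versions flow traced above.
lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    read -ra ver2 <<<"$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly greater
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
    done
    return 1   # equal components: not "less than"
}

lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option spelling"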
00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:35.504 18:01:04 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:35.504 18:01:04 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.504 18:01:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:35.504 18:01:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:35.504 18:01:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:35.504 18:01:04 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:36.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:36.332 Waiting for block devices as requested 00:11:36.332 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.591 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.591 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:36.851 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:42.138 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:42.138 18:01:11 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:42.138 18:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:42.138 18:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:42.138 18:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:42.138 18:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.138 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
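Values like oacs=0x12a, frmw=0x3 and lpa=0x7 being captured here are NVMe capability bitmasks, so consumers of these arrays test individual bits rather than comparing whole values. A hypothetical decoder, not part of functions.sh, with bit meanings taken from the NVMe base specification:

# Hypothetical OACS decoder (illustration only); bit meanings per NVMe base spec.
oacs=0x12a   # value captured above for nvme0
declare -A oacs_bit=(
    [0]="Security Send/Receive"
    [1]="Format NVM"
    [2]="Firmware Download/Commit"
    [3]="Namespace Management"
    [5]="Directives"
    [8]="Doorbell Buffer Config"
)
for bit in "${!oacs_bit[@]}"; do
    (( oacs & (1 << bit) )) && printf 'OACS bit %s: %s\n' "$bit" "${oacs_bit[$bit]}"
done
# 0x12a sets bits 1, 3, 5, 8: Format NVM, NS Mgmt, Directives, Doorbell Buffer Config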
00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:42.139 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:42.140 18:01:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.140 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:42.141 18:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:42.141 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:42.142 18:01:11 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.142 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:42.143 18:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:42.143 18:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:42.143 18:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:42.143 18:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:42.143 18:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:42.143 
18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:42.143 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
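
The id-ctrl fields scrolling past here come from the nvme_get helper traced at functions.sh@16-23: it pipes the plain-text output of nvme-cli's id-ctrl command through a `while IFS=: read -r reg val` loop and evals each pair into a per-controller associative array (nvme1[vid], nvme1[mdts], and so on). A minimal standalone sketch of that pattern, assuming nvme-cli's default "field : value" text output; parse_id_ctrl and the idctrl array name are illustrative, not the names functions.sh uses:

    #!/usr/bin/env bash
    # Sketch of the parse loop seen in this trace: read "field : value"
    # lines from `nvme id-ctrl <dev>` into an associative array.
    parse_id_ctrl() {
        local dev=$1 reg val
        declare -gA idctrl=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # "ps    0" -> "ps0", as in the trace
            [[ -n $reg && -n $val ]] || continue
            idctrl[$reg]=${val# }           # drop the single space after ':'
        done < <(nvme id-ctrl "$dev")
    }

    parse_id_ctrl /dev/nvme1                # needs nvme-cli and root
    echo "vid=${idctrl[vid]} sn=${idctrl[sn]} mdts=${idctrl[mdts]}"

Because read leaves the remainder of each line in the last variable, values that themselves contain a colon (the subnqn nqn.2019-08.org.qemu:12340 further down in this parse) survive the split intact.
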
00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:42.144 18:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:42.144 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:42.145 18:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:42.145 18:01:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.145 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
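
All of this runs inside the controller-discovery loop traced at functions.sh@47-63: for each /sys/class/nvme/nvme* entry the script resolves the PCI address, asks pci_can_use whether the device is fair game, runs the nvme_get parses shown here, then files the controller into the ctrls/nvmes/bdfs/ordered_ctrls maps (the @58-63 assignments visible after each parse). A sketch of just that bookkeeping, assuming the usual sysfs layout where $ctrl/device symlinks to the PCI function; pci_can_use is stubbed here, whereas the real filter in scripts/common.sh honors PCI allow/block lists:

    #!/usr/bin/env bash
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    pci_can_use() { return 0; }   # stub: real filter checks allow/block lists

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                            # nvme0, nvme1, ...
        pci=$(basename "$(readlink -f "$ctrl/device")") # BDF, e.g. 0000:00:10.0
        pci_can_use "$pci" || continue
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                 # name of per-ctrl ns map
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev      # indexed by ctrl number
    done

    for c in "${ordered_ctrls[@]}"; do echo "$c -> ${bdfs[$c]}"; done

Indexing ordered_ctrls by the controller number keeps the final report in nvme0, nvme1, ... order even when the glob sorts nvme10 ahead of nvme2.
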
00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.146 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:42.147 
18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:42.147 18:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:42.147 18:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:42.147 18:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:42.147 18:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:42.147 18:01:11 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.147 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:42.148 18:01:11 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:42.148 18:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
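The repeating functions.sh@21/@22/@23 records above and below are the body of the nvme_get helper: every "field : value" line that nvme-cli prints is split on the first ':' and eval'd into a global associative array named after the device (here nvme2). A minimal sketch reconstructed from the @NN markers in this trace; the exact whitespace trimming and the redirection wiring are assumptions, not code copied from functions.sh:

    nvme_get() {
        local ref=$1 reg val                        # @17: ref=nvme2 for "nvme_get nvme2 id-ctrl /dev/nvme2"
        shift                                       # @18: drop the array name, keep the nvme-cli arguments
        local -gA "$ref=()"                         # @20: global associative array, e.g. nvme2=()
        while IFS=: read -r reg val; do             # @21: "sn : 12342" -> reg="sn   ", val=" 12342 "
            [[ -n $val ]] || continue               # @22: skip lines with nothing after the colon
            eval "${ref}[${reg// /}]=\"${val# }\""  # @23: nvme2[sn]="12342 " (trimming is assumed)
        done < <(/usr/local/src/nvme-cli/nvme "$@") # @16: e.g. nvme id-ctrl /dev/nvme2
    }

Because val soaks up everything after the first colon, multi-colon values such as subnqn survive intact; the same per-line parsing is also why the wrapped power-state output further down produces oddball keys like rwt and active_power_workload.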
00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.148 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:42.149 18:01:11 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:42.149 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:42.150 
18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
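Wrapped around those per-field records, the @47-@63 markers trace the discovery loop that drives nvme_get across every controller and namespace: @47-@52 pick out a usable controller and dump its id-ctrl page, @53-@58 (visible just above for nvme2n1) do the same per namespace with id-ns, and @60-@63 register the results. A sketch under the same caveat, with structure and names read off the trace; the PCI-address derivation and the creation of the _ns array are assumptions:

    scan_nvme_ctrls() {                                    # wrapper assumed: @53 uses local -n
        local ctrl pci ctrl_dev ns ns_dev                  # ctrls/nvmes/bdfs (assoc) and
        for ctrl in /sys/class/nvme/nvme*; do              # ordered_ctrls (indexed) assumed global
            [[ -e $ctrl ]] || continue                     # @48
            pci=$(basename "$(readlink -f "$ctrl/device")") # @49 (assumed): 0000:00:12.0
            pci_can_use "$pci" || continue                 # @50: honours the PCI allow/block lists
            ctrl_dev=${ctrl##*/}                           # @51: nvme2
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # @52: fills nvme2[...]
            local -gA "${ctrl_dev}_ns=()"                  # assumed: creation not shown in this trace
            local -n _ctrl_ns=${ctrl_dev}_ns               # @53
            for ns in "$ctrl/${ctrl##*/}n"*; do            # @54
                [[ -e $ns ]] || continue                   # @55: /sys/class/nvme/nvme2/nvme2n1
                ns_dev=${ns##*/}                           # @56: nvme2n1
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # @57: fills nvme2n1[...]
                _ctrl_ns[${ns##*n}]=$ns_dev                # @58: nvme2_ns[1]=nvme2n1
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                   # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns              # @61
            bdfs["$ctrl_dev"]=$pci                         # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # @63
        done
    }

The ctrls/nvmes/bdfs/ordered_ctrls maps are what the later test steps walk; the @58-@63 records at the top of this section show exactly that registration for nvme1 (bdf 0000:00:10.0) before the loop moves on to nvme2.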
00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.150 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:42.151 18:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:42.151 18:01:11 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:42.151 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:42.152 18:01:11 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:42.152 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 
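The dump above is bash xtrace from functions.sh's nvme_get() filling the nvme2n2 associative array: nvme-cli's human-readable id-ns output is read one "field : value" line at a time (functions.sh@21), entries with an empty value are skipped (@22), and each pair is eval'd into the array named by the first argument (@23). A minimal standalone sketch of that loop, reconstructed from the trace; parse_id_output is a hypothetical name, and the whitespace trimming is an assumption about nvme-cli's column padding:

    # Hypothetical standalone version of the nvme_get() loop seen in the
    # trace (@17 local ref, @18 shift, @20 local -gA, @21-23 read/eval).
    parse_id_output() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # global associative array named by $ref
        while IFS=: read -r reg val; do    # split on the first ':' only
            reg=${reg//[[:space:]]/}       # 'mssrl   ' -> 'mssrl'
            val=${val# }                   # drop the space after ':'
            [[ -n $val ]] || continue      # skip banner lines with no value (@22)
            eval "${ref}[\$reg]=\"\$val\"" # e.g. nvme2n2[mssrl]=128 (@23)
        done < <("$@")                     # the command is the remaining args
    }

    parse_id_output nvme2n2 nvme id-ns /dev/nvme2n2
    echo "${nvme2n2[nsze]}"                # 0x100000 in this run

Because val is the last variable given to read, multi-colon values such as "ms:0 lbads:12 rp:0 (in use)" survive intact, which is why the lbaf entries above keep their inner colons.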
18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 
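At functions.sh@54-58, visible at the start of this stretch of the trace, namespaces are discovered by globbing the controller's sysfs directory, and each parsed namespace is recorded in a per-controller map keyed by its namespace index (in the real script _ctrl_ns is a nameref to the ${ctrl_dev}_ns array). A rough reconstruction of that walk, reusing the hypothetical parse_id_output helper from the previous sketch:

    # Sketch of the per-namespace walk traced at functions.sh@54-58;
    # $ctrl is a sysfs path such as /sys/class/nvme/nvme2.
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()                  # stands in for the nvme2_ns nameref
    for ns in "$ctrl/${ctrl##*/}n"*; do     # matches nvme2n1, nvme2n2, nvme2n3
        [[ -e $ns ]] || continue            # the glob may match nothing (@55)
        ns_dev=${ns##*/}                    # e.g. nvme2n3 (@56)
        parse_id_output "$ns_dev" nvme id-ns "/dev/$ns_dev"   # @57
        _ctrl_ns[${ns##*n}]=$ns_dev         # key '3' -> nvme2n3 (@58)
    done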
18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:42.153 18:01:11 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.153 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:42.154 
18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:42.154 18:01:11 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:42.154 18:01:11 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:42.154 18:01:11 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:42.154 18:01:11 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:42.154 18:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
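Just above, at functions.sh@47-52 and scripts/common.sh@18-27, each /sys/class/nvme/nvme* controller is screened against PCI allow/block lists before its id-ctrl output is parsed; afterwards @60-63 register it in the ctrls, nvmes, and bdfs maps plus an indexed ordered_ctrls array that gives a stable iteration order. A condensed sketch under stated assumptions: PCI_ALLOWED and PCI_BLOCKED are assumed variable names (the trace only shows a match attempt against an empty allow list and an emptiness test on the block list), and the readlink step is a hypothetical way to obtain the BDF that the trace reports directly:

    # Sketch of pci_can_use() as suggested by common.sh@18-27 in the trace.
    pci_can_use() {
        local bdf=$1
        if [[ -n ${PCI_ALLOWED:-} ]]; then               # allow list set?
            [[ " $PCI_ALLOWED " == *" $bdf "* ]] || return 1
        fi
        [[ " ${PCI_BLOCKED:-} " != *" $bdf "* ]]         # must not be blocked
    }

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do                # @47
        [[ -e $ctrl ]] || continue                       # @48
        pci=$(readlink -f "$ctrl/device")                # hypothetical BDF lookup;
        pci=${pci##*/}                                   # trace shows the bdf directly (@49)
        pci_can_use "$pci" || continue                   # @50
        ctrl_dev=${ctrl##*/}                             # e.g. nvme3 (@51)
        parse_id_output "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"   # @52
        ctrls["$ctrl_dev"]=$ctrl_dev                     # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # name of its namespace map (@61)
        bdfs["$ctrl_dev"]=$pci                           # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # index 3 -> nvme3 (@63)
    done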
00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.154 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 
18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:42.155 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.156 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:42.416 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:42.416 18:01:11 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:42.417 18:01:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:42.417 
18:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:11:42.417 18:01:11 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:11:42.417 18:01:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:42.417 18:01:11 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:42.417 18:01:11 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:42.986 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:43.924 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.924 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.924 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:43.924 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic
00:11:43.924 18:01:13 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:43.924 18:01:13 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:11:43.924 18:01:13 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:43.924 18:01:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:43.924 ************************************
00:11:43.924 START TEST nvme_simple_copy
00:11:43.924 ************************************
00:11:44.184 18:01:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:44.184 Initializing NVMe Controllers
00:11:44.184 Attaching to 0000:00:10.0
00:11:44.184 Controller supports SCC. Attached to 0000:00:10.0
00:11:44.184 Namespace ID: 1 size: 6GB
00:11:44.184 Initialization complete.
00:11:44.184
00:11:44.184 Controller QEMU NVMe Ctrl (12340 )
00:11:44.184 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:44.184 Namespace Block Size:4096
00:11:44.184 Writing LBAs 0 to 63 with Random Data
00:11:44.184 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:44.184 LBAs matching Written Data: 64
00:11:44.184
00:11:44.184 real 0m0.315s
00:11:44.184 user 0m0.117s
00:11:44.184 sys 0m0.097s
00:11:44.184 18:01:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:44.184 18:01:13 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:44.184 ************************************
00:11:44.184 END TEST nvme_simple_copy
00:11:44.184 ************************************
00:11:44.443
00:11:44.443 real 0m9.066s
00:11:44.443 user 0m1.533s
00:11:44.443 sys 0m2.543s
00:11:44.443 18:01:13 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:11:44.443 18:01:13 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:44.443 ************************************
00:11:44.443 END TEST nvme_scc
00:11:44.443 ************************************
00:11:44.443 18:01:13 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:44.443 18:01:13 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:44.443 18:01:13 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:44.443 18:01:13 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:44.443 18:01:13 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:44.443 18:01:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:11:44.443 18:01:13 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:11:44.443 18:01:13 -- common/autotest_common.sh@10 -- # set +x
00:11:44.443 ************************************
00:11:44.443 START TEST nvme_fdp
00:11:44.443 ************************************
00:11:44.443 18:01:13 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh
00:11:44.443 * Looking for test storage...
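
The nvme_scc run above is driven by two small mechanisms in test/common/nvme/functions.sh: nvme_get folds each controller's `nvme id-ctrl` output into a bash associative array (the long runs of `eval 'nvme3[reg]="val"'`), and ctrl_has_scc then tests bit 8 of the stored ONCS word, the bit the NVMe spec assigns to the optional Copy (Simple Copy) command. With ONCS=0x15d (0b1_0101_1101) that bit is set on every controller here, and nvme1 at 0000:00:10.0 is the first name echoed back for the copy test. A condensed sketch of the pattern, assuming nvme-cli's "field : value" id-ctrl text output — a simplified reading, not the verbatim functions.sh source:

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvme_get / ctrl_has_scc pattern traced above.
# Assumes nvme-cli prints one "field : value" line per register; simplified,
# not the verbatim test/common/nvme/functions.sh implementation.

declare -A ctrl_regs

scan_ctrl() {                             # fold id-ctrl output into ctrl_regs
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        reg=${reg// } val=${val# }        # trim the column padding
        [[ -n $reg && -n $val ]] || continue
        ctrl_regs[$reg]=$val              # e.g. ctrl_regs[oncs]=0x15d
    done < <(nvme id-ctrl "$dev")
}

ctrl_has_scc() {                          # ONCS bit 8 == Copy command support
    (( ${ctrl_regs[oncs]:-0} & 1 << 8 ))
}

scan_ctrl /dev/nvme1
ctrl_has_scc && echo "Simple Copy supported"   # true for oncs=0x15d
```

Because `ctrls` is an associative array, the iteration order in the log (nvme1, nvme0, nvme3, nvme2) is arbitrary; all four pass the bit test, and the harness simply takes the first match.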
00:11:44.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:44.443 18:01:13 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:44.443 18:01:13 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:44.443 18:01:13 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:11:44.702 18:01:13 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:44.702 18:01:13 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.702 18:01:13 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.702 18:01:13 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.702 18:01:13 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:11:44.703 18:01:13 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.703 18:01:13 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:44.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.703 --rc genhtml_branch_coverage=1 00:11:44.703 --rc genhtml_function_coverage=1 00:11:44.703 --rc genhtml_legend=1 00:11:44.703 --rc geninfo_all_blocks=1 00:11:44.703 --rc geninfo_unexecuted_blocks=1 00:11:44.703 00:11:44.703 ' 00:11:44.703 18:01:13 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:44.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.703 --rc genhtml_branch_coverage=1 00:11:44.703 --rc genhtml_function_coverage=1 00:11:44.703 --rc genhtml_legend=1 00:11:44.703 --rc geninfo_all_blocks=1 00:11:44.703 --rc geninfo_unexecuted_blocks=1 00:11:44.703 00:11:44.703 ' 00:11:44.703 18:01:13 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:44.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.703 --rc genhtml_branch_coverage=1 00:11:44.703 --rc genhtml_function_coverage=1 00:11:44.703 --rc genhtml_legend=1 00:11:44.703 --rc geninfo_all_blocks=1 00:11:44.703 --rc geninfo_unexecuted_blocks=1 00:11:44.703 00:11:44.703 ' 00:11:44.703 18:01:13 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:44.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.703 --rc genhtml_branch_coverage=1 00:11:44.703 --rc genhtml_function_coverage=1 00:11:44.703 --rc genhtml_legend=1 00:11:44.703 --rc geninfo_all_blocks=1 00:11:44.703 --rc geninfo_unexecuted_blocks=1 00:11:44.703 00:11:44.703 ' 00:11:44.703 18:01:13 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.703 18:01:13 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.703 18:01:13 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.703 18:01:13 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.703 18:01:13 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.703 18:01:13 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:44.703 18:01:13 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
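
The `lt 1.15 2` probe just above is scripts/common.sh's cmp_versions deciding which lcov option set to export: the `lt` wrapper passes `<`, both version strings are split on `.`, `-`, and `:` (the `IFS=.-:` lines), and the pieces are compared numerically left to right with missing components counting as zero, which is why 1.15 sorts below 2 even though a plain string compare would say otherwise. A simplified sketch of that comparison — version_lt is a hypothetical stand-in, and the real cmp_versions handles the other operators as well:

```bash
# Simplified sketch of the component-wise compare behind `lt 1.15 2` above.
# version_lt is a hypothetical stand-in for scripts/common.sh's cmp_versions.

version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller -> true
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger  -> false
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # 1 < 2, so this prints
```

Here the test succeeds, so the pre-2.x `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags are folded into LCOV_OPTS and LCOV, as the exported strings above show.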
00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:44.703 18:01:13 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:44.703 18:01:13 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:44.703 18:01:13 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:45.271 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:45.271 Waiting for block devices as requested 00:11:45.530 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.530 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.530 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:45.789 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:51.099 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:51.099 18:01:20 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:51.099 18:01:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:51.099 18:01:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:51.099 18:01:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:51.099 18:01:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:51.099 18:01:20 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:51.099 18:01:20 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.099 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:51.100 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:51.100 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:51.101 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:51.101 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.101 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:51.102 
18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:11:51.102 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
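The frames above (nvme/functions.sh@16-@23) are SPDK's nvme_get helper reading `nvme id-ctrl`/`id-ns` output as colon-separated "reg : val" pairs and eval'ing each register into a global associative array named after the device. A minimal sketch of that pattern, reconstructed from the trace rather than copied from functions.sh (the NVME_BIN variable and the whitespace trimming are assumptions):

    #!/usr/bin/env bash
    NVME_BIN=${NVME_BIN:-nvme}  # assumed; this run invokes /usr/local/src/nvme-cli/nvme

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # nvme_get nvme0n1 ... declares global nvme0n1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # skip banner/blank lines, as the trace does
            reg=${reg//[[:space:]]/}        # "nsze   " -> nsze (assumed trimming)
            eval "${ref}[\$reg]=\${val# }"  # -> nvme0n1[nsze]=0x140000
        done < <("$NVME_BIN" "$@")
    }

    # e.g.: nvme_get nvme0n1 id-ns /dev/nvme0n1 && echo "${nvme0n1[nsze]}"

Because `read` hands the last variable the untouched remainder of the line, multi-colon values such as 'ms:0 lbads:12 rp:0 (in use)' survive intact, which is why the lbaf entries above keep their inner colons.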
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:11:51.103 18:01:20 nvme_fdp -- scripts/common.sh@18 -- # local i
00:11:51.103 18:01:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:11:51.103 18:01:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:51.103 18:01:20 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:11:51.103 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:11:51.104 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:11:51.105 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:11:51.106 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:11:51.107 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:51.108 18:01:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:51.108 18:01:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:51.108 18:01:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:51.108 18:01:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:51.108 
18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:51.108 18:01:20 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:51.108 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.109 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
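The trace above is nvme/functions.sh's nvme_get populating the global associative array nvme2 with one entry per line of `nvme id-ctrl /dev/nvme2` output. A minimal sketch of that pattern, pieced together from the traced commands (`local -gA 'nvme2=()'`, `IFS=:`, `read -r reg val`, `eval 'nvme2[vid]="0x1b36"'`); the helper name, trimming, and quoting below are assumptions, not the script's exact code:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern seen in the xtrace above: run an
    # nvme-cli identify subcommand and load its "field : value" lines
    # into a global associative array named after the device.
    nvme_get_sketch() {
        local ref=$1 reg val       # e.g. ref=nvme2
        shift
        local -gA "$ref=()"        # traced as: local -gA 'nvme2=()'
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # "vid   " -> "vid" (assumed trim)
            val=${val# }               # drop the space after ':' (assumed)
            [[ -n $val ]] || continue  # traced as: [[ -n <val> ]]
            eval "${ref}[\$reg]=\"\$val\""   # e.g. nvme2[vid]="0x1b36"
        done < <(nvme "$@")        # this CI run pins /usr/local/src/nvme-cli/nvme
    }
    # usage: nvme_get_sketch nvme2 id-ctrl /dev/nvme2; echo "${nvme2[sn]}"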
00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:51.110 18:01:20 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
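For reference, the oncs value captured just above (nvme2[oncs]=0x15d) is the Identify Controller ONCS bitmask of optional NVM commands. A hedged decode using bit positions from the NVMe base specification (verify against the spec revision you target):

    # 0x15d = 0b1_0101_1101 -> bits 0, 2, 3, 4, 6, 8 set
    oncs=0x15d
    (( oncs & 1 << 0 )) && echo "Compare"
    (( oncs & 1 << 2 )) && echo "Dataset Management"
    (( oncs & 1 << 3 )) && echo "Write Zeroes"
    (( oncs & 1 << 8 )) && echo "Copy"
    # bits 4 and 6 (Save/Select in Features, Timestamp) are also set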
00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.110 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:51.111 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
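The namespace fields just captured are enough to work out the device geometry by hand: nvme2n1 reports nsze=0x100000 and flbas=0x4, and the lbaf4 entry later in this trace reads "ms:0 lbads:12 rp:0 (in use)", i.e. 2^12 = 4096-byte blocks. A small worked example with the values copied from the trace:

    flbas=0x4; nsze=0x100000; lbads=12
    fmt=$(( flbas & 0xf ))   # low nibble selects the LBA format index -> 4
    bs=$(( 1 << lbads ))     # 2^12 = 4096-byte logical blocks
    echo $(( nsze * bs ))    # 1048576 * 4096 = 4294967296 bytes (4 GiB)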
00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.111 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:51.112 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:51.113 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.113 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.114 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
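
The eval records above are bash xtrace output from the suite's nvme_get helper (nvme/functions.sh@16-23 in the trace): it runs nvme-cli id-ctrl or id-ns against a device, splits each "reg : val" output line on the first colon via IFS=:, and evals the pair into a global associative array named after the device (nvme2n1, nvme2n2, nvme2n3, ...). Below is a minimal sketch of that pattern, assuming plain-text nvme-cli output and simplified whitespace trimming; it is not the verbatim SPDK helper, and nvme here stands in for the /usr/local/src/nvme-cli/nvme binary the trace actually invokes.

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                        # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}               # "lbaf  0 " -> "lbaf0"
            [[ -n $reg && -n $val ]] || continue   # skip headers and blank lines
            val=${val# }                           # drop the blank after the colon
            eval "${ref}[\$reg]=\$val"             # e.g. nvme2n3[nsze]='0x100000'
        done < <(nvme "$@")                        # remainder after the first colon
    }                                              # stays in val, so lbaf values survive

Invoked as nvme_get nvme2n3 id-ns /dev/nvme2n3, the sketch yields the same kind of assignments logged above, such as nvme2n3[nsze]=0x100000 or nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'. The surrounding loop visible in the trace (functions.sh@47-63) repeats this for every controller under /sys/class/nvme/nvme* and each of its namespaces, then records the results in the ctrls, nvmes, bdfs, and ordered_ctrls maps that the FDP checks consult later.
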
00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:51.115 
18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.115 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:51.116 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:51.116 18:01:20 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:51.116 18:01:20 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:51.116 18:01:20 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:51.116 18:01:20 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:51.116 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:51.116 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 
18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:51.377 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 
18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:51.378 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
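The eval chatter above is nvme/functions.sh caching every identify-controller register of nvme3 into a bash associative array: each "reg : val" line is split on the first colon and stored via eval. A condensed sketch of the pattern, assuming nvme-cli's id-ctrl text output as the source (the array name and device path are illustrative; the real helper iterates its own cached output):

    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue               # skip banners and blank lines
        reg=${reg//[[:space:]]/} val=${val# }   # trim padding, e.g. "ps    0" -> "ps0"
        eval "nvme3[$reg]=\"$val\""             # nvme3[sqes]=0x66, nvme3[cqes]=0x44, ...
    done < <(nvme id-ctrl /dev/nvme3)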
00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:51.379 18:01:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:51.379 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
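ctrl_has_fdp, traced here for each controller in turn, reduces to one bit test: bit 19 of the Controller Attributes (CTRATT) field advertises Flexible Data Placement, so nvme3's 0x88010 passes while the plain 0x8000 controllers do not. A minimal standalone version using the values from this run:

    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & 1 << 19 ))      # non-zero only when the FDP attribute bit is set
    }
    ctrl_has_fdp 0x88010 && echo nvme3     # bit 19 set -> printed, as in the trace
    ctrl_has_fdp 0x8000 || echo "no FDP"   # nvme0/nvme1/nvme2 fall through silently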
00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:51.380 18:01:20 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:51.380 18:01:20 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:51.380 18:01:20 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:51.380 18:01:20 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:51.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:52.885 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.885 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.885 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.885 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:52.885 18:01:22 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:52.885 18:01:22 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:52.885 18:01:22 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:52.885 18:01:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:52.885 ************************************ 00:11:52.885 START TEST nvme_flexible_data_placement 00:11:52.885 ************************************ 00:11:52.885 18:01:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:53.144 Initializing NVMe Controllers 00:11:53.144 Attaching to 0000:00:13.0 00:11:53.144 Controller supports FDP Attached to 0000:00:13.0 00:11:53.144 Namespace ID: 1 Endurance Group ID: 1 00:11:53.144 Initialization complete. 00:11:53.144 00:11:53.144 ================================== 00:11:53.144 == FDP tests for Namespace: #01 == 00:11:53.144 ================================== 00:11:53.144 00:11:53.144 Get Feature: FDP: 00:11:53.144 ================= 00:11:53.144 Enabled: Yes 00:11:53.144 FDP configuration Index: 0 00:11:53.144 00:11:53.144 FDP configurations log page 00:11:53.144 =========================== 00:11:53.144 Number of FDP configurations: 1 00:11:53.144 Version: 0 00:11:53.144 Size: 112 00:11:53.144 FDP Configuration Descriptor: 0 00:11:53.144 Descriptor Size: 96 00:11:53.144 Reclaim Group Identifier format: 2 00:11:53.144 FDP Volatile Write Cache: Not Present 00:11:53.144 FDP Configuration: Valid 00:11:53.144 Vendor Specific Size: 0 00:11:53.144 Number of Reclaim Groups: 2 00:11:53.144 Number of Reclaim Unit Handles: 8 00:11:53.144 Max Placement Identifiers: 128 00:11:53.144 Number of Namespaces Supported: 256 00:11:53.144 Reclaim Unit Nominal Size: 6000000 bytes 00:11:53.144 Estimated Reclaim Unit Time Limit: Not Reported 00:11:53.144 RUH Desc #000: RUH Type: Initially Isolated 00:11:53.144 RUH Desc #001: RUH Type: Initially Isolated 00:11:53.144 RUH Desc #002: RUH Type: Initially Isolated 00:11:53.144 RUH Desc #003: RUH Type: Initially Isolated 00:11:53.144 RUH Desc #004: RUH Type: Initially Isolated 00:11:53.144 RUH Desc #005: RUH Type: Initially Isolated 00:11:53.144 RUH Desc #006: RUH Type: Initially Isolated 00:11:53.144 RUH Desc #007: RUH Type: Initially Isolated 00:11:53.144 00:11:53.144 FDP reclaim unit handle usage log page 00:11:53.144 ====================================== 00:11:53.144 Number of Reclaim Unit Handles: 8 00:11:53.144 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:53.144 RUH Usage Desc #001: RUH Attributes: Unused 00:11:53.144 RUH Usage Desc #002: RUH Attributes: Unused 00:11:53.144 RUH Usage Desc #003: RUH Attributes: Unused 00:11:53.144 RUH Usage Desc #004: RUH Attributes: Unused 00:11:53.144 RUH Usage Desc #005: RUH Attributes: Unused 00:11:53.144 RUH Usage Desc #006: RUH Attributes: Unused 00:11:53.144 RUH Usage Desc #007: RUH Attributes: Unused 00:11:53.144 00:11:53.144 FDP statistics log page 00:11:53.144 ======================= 00:11:53.144 Host bytes with metadata written: 1001750528 00:11:53.144 Media bytes with metadata written: 1001922560 00:11:53.144 Media bytes erased: 0 00:11:53.144 00:11:53.144 FDP Reclaim unit handle status 00:11:53.145 ============================== 00:11:53.145 Number of RUHS descriptors: 2 00:11:53.145 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000004a8 00:11:53.145 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:53.145 00:11:53.145 FDP write on placement id: 0 success 00:11:53.145 00:11:53.145 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:11:53.145 00:11:53.145 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:53.145 00:11:53.145 Get Feature: FDP Events for Placement handle: #0 00:11:53.145 ======================== 00:11:53.145 Number of FDP Events: 6 00:11:53.145 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:53.145 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:53.145 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:53.145 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:53.145 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:53.145 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:53.145 00:11:53.145 FDP events log page 00:11:53.145 =================== 00:11:53.145 Number of FDP events: 1 00:11:53.145 FDP Event #0: 00:11:53.145 Event Type: RU Not Written to Capacity 00:11:53.145 Placement Identifier: Valid 00:11:53.145 NSID: Valid 00:11:53.145 Location: Valid 00:11:53.145 Placement Identifier: 0 00:11:53.145 Event Timestamp: 7 00:11:53.145 Namespace Identifier: 1 00:11:53.145 Reclaim Group Identifier: 0 00:11:53.145 Reclaim Unit Handle Identifier: 0 00:11:53.145 00:11:53.145 FDP test passed 00:11:53.145 00:11:53.145 real 0m0.285s 00:11:53.145 user 0m0.088s 00:11:53.145 sys 0m0.096s 00:11:53.145 ************************************ 00:11:53.145 END TEST nvme_flexible_data_placement 00:11:53.145 ************************************ 00:11:53.145 18:01:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.145 18:01:22 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:53.404 ************************************ 00:11:53.404 END TEST nvme_fdp 00:11:53.404 ************************************ 00:11:53.404 00:11:53.404 real 0m8.937s 00:11:53.404 user 0m1.546s 00:11:53.404 sys 0m2.430s 00:11:53.404 18:01:22 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.404 18:01:22 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:53.404 18:01:22 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:53.404 18:01:22 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:53.404 18:01:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:53.404 18:01:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.404 18:01:22 -- common/autotest_common.sh@10 -- # set +x 00:11:53.404 ************************************ 00:11:53.404 START TEST nvme_rpc 00:11:53.404 ************************************ 00:11:53.404 18:01:22 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:53.404 * Looking for test storage... 
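The START TEST/END TEST banners and the real/user/sys summaries running through this log come from the harness's run_test wrapper. A hedged sketch reconstructed from the xtrace alone (the real helper in common/autotest_common.sh also validates its arguments and toggles xtrace state):

    run_test() {
        local name=$1 es=0; shift
        printf '%s\n' "************************************" "START TEST $name" "************************************"
        time "$@" || es=$?          # bash's time keyword emits the real/user/sys lines
        printf '%s\n' "************************************" "END TEST $name" "************************************"
        return $es
    }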
00:11:53.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.664 18:01:22 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:53.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.664 --rc genhtml_branch_coverage=1 00:11:53.664 --rc genhtml_function_coverage=1 00:11:53.664 --rc genhtml_legend=1 00:11:53.664 --rc geninfo_all_blocks=1 00:11:53.664 --rc geninfo_unexecuted_blocks=1 00:11:53.664 00:11:53.664 ' 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:53.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.664 --rc genhtml_branch_coverage=1 00:11:53.664 --rc genhtml_function_coverage=1 00:11:53.664 --rc genhtml_legend=1 00:11:53.664 --rc geninfo_all_blocks=1 00:11:53.664 --rc geninfo_unexecuted_blocks=1 00:11:53.664 00:11:53.664 ' 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:11:53.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.664 --rc genhtml_branch_coverage=1 00:11:53.664 --rc genhtml_function_coverage=1 00:11:53.664 --rc genhtml_legend=1 00:11:53.664 --rc geninfo_all_blocks=1 00:11:53.664 --rc geninfo_unexecuted_blocks=1 00:11:53.664 00:11:53.664 ' 00:11:53.664 18:01:22 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:53.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.665 --rc genhtml_branch_coverage=1 00:11:53.665 --rc genhtml_function_coverage=1 00:11:53.665 --rc genhtml_legend=1 00:11:53.665 --rc geninfo_all_blocks=1 00:11:53.665 --rc geninfo_unexecuted_blocks=1 00:11:53.665 00:11:53.665 ' 00:11:53.665 18:01:22 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:53.665 18:01:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:11:53.665 18:01:22 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:53.665 18:01:22 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66911 00:11:53.665 18:01:22 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:53.665 18:01:22 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:53.665 18:01:22 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66911 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 66911 ']' 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:53.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:53.665 18:01:22 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:53.924 [2024-11-05 18:01:23.042448] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
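The nvme_rpc suite drives a live target: spdk_tgt is launched on two cores (-m 0x3) and the harness blocks in waitforlisten until the RPC socket answers. A minimal sketch of that handshake, assuming the default /var/tmp/spdk.sock socket (the polling loop is illustrative, not the exact waitforlisten body):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_tgt_pid" || exit 1   # give up if the target died during startup
        sleep 0.1
    done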
00:11:53.924 [2024-11-05 18:01:23.042573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66911 ] 00:11:53.924 [2024-11-05 18:01:23.225228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:54.183 [2024-11-05 18:01:23.334727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.183 [2024-11-05 18:01:23.334761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.121 18:01:24 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.121 18:01:24 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:55.121 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:55.121 Nvme0n1 00:11:55.121 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:55.121 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:55.380 request: 00:11:55.380 { 00:11:55.380 "bdev_name": "Nvme0n1", 00:11:55.380 "filename": "non_existing_file", 00:11:55.380 "method": "bdev_nvme_apply_firmware", 00:11:55.380 "req_id": 1 00:11:55.380 } 00:11:55.380 Got JSON-RPC error response 00:11:55.380 response: 00:11:55.380 { 00:11:55.380 "code": -32603, 00:11:55.380 "message": "open file failed." 00:11:55.380 } 00:11:55.380 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:55.380 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:55.380 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:55.639 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:55.639 18:01:24 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66911 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 66911 ']' 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 66911 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66911 00:11:55.639 killing process with pid 66911 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66911' 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@971 -- # kill 66911 00:11:55.639 18:01:24 nvme_rpc -- common/autotest_common.sh@976 -- # wait 66911 00:11:58.181 00:11:58.181 real 0m4.506s 00:11:58.181 user 0m8.223s 00:11:58.181 sys 0m0.761s 00:11:58.181 18:01:27 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:58.181 ************************************ 00:11:58.181 END TEST nvme_rpc 00:11:58.181 ************************************ 00:11:58.181 18:01:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.181 18:01:27 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:58.181 18:01:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:11:58.181 18:01:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:58.181 18:01:27 -- common/autotest_common.sh@10 -- # set +x 00:11:58.181 ************************************ 00:11:58.181 START TEST nvme_rpc_timeouts 00:11:58.181 ************************************ 00:11:58.181 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:58.181 * Looking for test storage... 00:11:58.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:58.181 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:58.181 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:11:58.181 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:58.181 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.181 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.182 18:01:27 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:58.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.182 --rc genhtml_branch_coverage=1 00:11:58.182 --rc genhtml_function_coverage=1 00:11:58.182 --rc genhtml_legend=1 00:11:58.182 --rc geninfo_all_blocks=1 00:11:58.182 --rc geninfo_unexecuted_blocks=1 00:11:58.182 00:11:58.182 ' 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:58.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.182 --rc genhtml_branch_coverage=1 00:11:58.182 --rc genhtml_function_coverage=1 00:11:58.182 --rc genhtml_legend=1 00:11:58.182 --rc geninfo_all_blocks=1 00:11:58.182 --rc geninfo_unexecuted_blocks=1 00:11:58.182 00:11:58.182 ' 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:58.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.182 --rc genhtml_branch_coverage=1 00:11:58.182 --rc genhtml_function_coverage=1 00:11:58.182 --rc genhtml_legend=1 00:11:58.182 --rc geninfo_all_blocks=1 00:11:58.182 --rc geninfo_unexecuted_blocks=1 00:11:58.182 00:11:58.182 ' 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:58.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.182 --rc genhtml_branch_coverage=1 00:11:58.182 --rc genhtml_function_coverage=1 00:11:58.182 --rc genhtml_legend=1 00:11:58.182 --rc geninfo_all_blocks=1 00:11:58.182 --rc geninfo_unexecuted_blocks=1 00:11:58.182 00:11:58.182 ' 00:11:58.182 18:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:58.182 18:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66987 00:11:58.182 18:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66987 00:11:58.182 18:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67021 00:11:58.182 18:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:58.182 18:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:58.182 18:01:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67021 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67021 ']' 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:58.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:58.182 18:01:27 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:58.442 [2024-11-05 18:01:27.510043] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:11:58.442 [2024-11-05 18:01:27.510168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67021 ] 00:11:58.442 [2024-11-05 18:01:27.688723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:58.701 [2024-11-05 18:01:27.797473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.701 [2024-11-05 18:01:27.797544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.640 18:01:28 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:59.640 18:01:28 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:11:59.640 Checking default timeout settings: 00:11:59.640 18:01:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:59.640 18:01:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:59.899 Making settings changes with rpc: 00:11:59.899 18:01:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:59.899 18:01:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:59.899 Check default vs. modified settings: 00:11:59.899 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:59.899 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66987 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66987 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:00.468 Setting action_on_timeout is changed as expected. 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66987 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66987 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:00.468 Setting timeout_us is changed as expected. 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
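Each "changed as expected" line above comes from diffing two save_config dumps field by field. Condensed from the traced commands (the /tmp/settings_* names follow this run's convention; the real script also asserts the exact expected values rather than mere change):

    rpc.py save_config > /tmp/settings_default_66987
    rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    rpc.py save_config > /tmp/settings_modified_66987
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_66987 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_66987 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done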
00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66987 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66987 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:00.468 Setting timeout_admin_us is changed as expected. 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66987 /tmp/settings_modified_66987 00:12:00.468 18:01:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67021 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67021 ']' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67021 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67021 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67021' 00:12:00.468 killing process with pid 67021 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67021 00:12:00.468 18:01:29 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67021 00:12:03.005 RPC TIMEOUT SETTING TEST PASSED. 00:12:03.005 18:01:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:12:03.005 00:12:03.005 real 0m4.910s 00:12:03.005 user 0m9.242s 00:12:03.006 sys 0m0.797s 00:12:03.006 18:01:32 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.006 18:01:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:03.006 ************************************ 00:12:03.006 END TEST nvme_rpc_timeouts 00:12:03.006 ************************************ 00:12:03.006 18:01:32 -- spdk/autotest.sh@239 -- # uname -s 00:12:03.006 18:01:32 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:03.006 18:01:32 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:03.006 18:01:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:03.006 18:01:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:03.006 18:01:32 -- common/autotest_common.sh@10 -- # set +x 00:12:03.006 ************************************ 00:12:03.006 START TEST sw_hotplug 00:12:03.006 ************************************ 00:12:03.006 18:01:32 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:03.006 * Looking for test storage... 00:12:03.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:03.006 18:01:32 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:03.006 18:01:32 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:12:03.006 18:01:32 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:03.265 18:01:32 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:03.265 18:01:32 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:03.266 18:01:32 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.266 18:01:32 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:03.266 18:01:32 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.266 18:01:32 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.266 18:01:32 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.266 18:01:32 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:03.266 18:01:32 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.266 18:01:32 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:03.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.266 --rc genhtml_branch_coverage=1 00:12:03.266 --rc genhtml_function_coverage=1 00:12:03.266 --rc genhtml_legend=1 00:12:03.266 --rc geninfo_all_blocks=1 00:12:03.266 --rc geninfo_unexecuted_blocks=1 00:12:03.266 00:12:03.266 ' 00:12:03.266 18:01:32 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:03.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.266 --rc genhtml_branch_coverage=1 00:12:03.266 --rc genhtml_function_coverage=1 00:12:03.266 --rc genhtml_legend=1 00:12:03.266 --rc geninfo_all_blocks=1 00:12:03.266 --rc geninfo_unexecuted_blocks=1 00:12:03.266 00:12:03.266 ' 00:12:03.266 18:01:32 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:03.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.266 --rc genhtml_branch_coverage=1 00:12:03.266 --rc genhtml_function_coverage=1 00:12:03.266 --rc genhtml_legend=1 00:12:03.266 --rc geninfo_all_blocks=1 00:12:03.266 --rc geninfo_unexecuted_blocks=1 00:12:03.266 00:12:03.266 ' 00:12:03.266 18:01:32 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:03.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.266 --rc genhtml_branch_coverage=1 00:12:03.266 --rc genhtml_function_coverage=1 00:12:03.266 --rc genhtml_legend=1 00:12:03.266 --rc geninfo_all_blocks=1 00:12:03.266 --rc geninfo_unexecuted_blocks=1 00:12:03.266 00:12:03.266 ' 00:12:03.266 18:01:32 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:03.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.094 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.094 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.094 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.094 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:04.094 18:01:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:04.094 18:01:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:04.094 18:01:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
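nvme_in_userspace, expanded step by step above, finds NVMe controllers from lspci alone: PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The traced pipeline, reassembled into one line:

    lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # -> 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 on this VM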
00:12:04.094 18:01:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:04.094 18:01:33 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:04.094 18:01:33 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:04.094 18:01:33 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:04.094 18:01:33 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:04.094 18:01:33 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:04.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:04.924 Waiting for block devices as requested 00:12:05.183 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.183 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.183 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.442 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.717 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:10.717 18:01:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:10.717 18:01:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:10.976 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:11.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:11.236 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:11.495 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:12.063 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.063 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:12.063 18:01:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67914 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:12.063 18:01:41 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:12.063 18:01:41 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:12.063 18:01:41 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:12.063 18:01:41 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:12.063 18:01:41 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:12.063 18:01:41 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:12.323 Initializing NVMe Controllers 00:12:12.323 Attaching to 0000:00:10.0 00:12:12.323 Attaching to 0000:00:11.0 00:12:12.323 Attached to 0000:00:11.0 00:12:12.323 Attached to 0000:00:10.0 00:12:12.323 Initialization complete. Starting I/O... 
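[Editor's sketch] The hotplug example is now attached to both controllers and issuing I/O, and the @36 sleep gives it hotplug_wait (6) seconds before the first event. When "echo 1" appears at sw_hotplug.sh line 40 below, xtrace is hiding the redirection target, so the write looks like a bare echo. A sketch of what that write plausibly is, using the standard sysfs PCI attributes; the exact targets are an assumption here, though the rescan write does appear verbatim in the trap registered at line 112 later in this log:

for dev in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$dev/remove"   # surprise-remove the function
done
sleep "$hotplug_wait"                             # let the example observe the removal
echo 1 > /sys/bus/pci/rescan                      # re-enumerate so the devices return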
00:12:12.323 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:12.323 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:12.323 00:12:13.260 QEMU NVMe Ctrl (12341 ): 1612 I/Os completed (+1612) 00:12:13.260 QEMU NVMe Ctrl (12340 ): 1612 I/Os completed (+1612) 00:12:13.260 00:12:14.638 QEMU NVMe Ctrl (12341 ): 3792 I/Os completed (+2180) 00:12:14.638 QEMU NVMe Ctrl (12340 ): 3792 I/Os completed (+2180) 00:12:14.638 00:12:15.576 QEMU NVMe Ctrl (12341 ): 6036 I/Os completed (+2244) 00:12:15.576 QEMU NVMe Ctrl (12340 ): 6036 I/Os completed (+2244) 00:12:15.576 00:12:16.514 QEMU NVMe Ctrl (12341 ): 8280 I/Os completed (+2244) 00:12:16.514 QEMU NVMe Ctrl (12340 ): 8280 I/Os completed (+2244) 00:12:16.514 00:12:17.451 QEMU NVMe Ctrl (12341 ): 10512 I/Os completed (+2232) 00:12:17.451 QEMU NVMe Ctrl (12340 ): 10512 I/Os completed (+2232) 00:12:17.451 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.389 [2024-11-05 18:01:47.358439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:18.389 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:18.389 [2024-11-05 18:01:47.360265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.360459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.360489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.360514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:18.389 [2024-11-05 18:01:47.363106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.363158] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.363177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.363195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.389 [2024-11-05 18:01:47.395773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:18.389 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:18.389 [2024-11-05 18:01:47.397381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.397473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.397516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.397535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:18.389 [2024-11-05 18:01:47.400085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.400127] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.400148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 [2024-11-05 18:01:47.400166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:18.389 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:18.389 Attaching to 0000:00:10.0 00:12:18.389 Attached to 0000:00:10.0 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:18.389 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.389 Attaching to 0000:00:11.0 00:12:18.389 Attached to 0000:00:11.0 00:12:19.327 QEMU NVMe Ctrl (12340 ): 2140 I/Os completed (+2140) 00:12:19.327 QEMU NVMe Ctrl (12341 ): 1913 I/Os completed (+1913) 00:12:19.327 00:12:20.264 QEMU NVMe Ctrl (12340 ): 4368 I/Os completed (+2228) 00:12:20.265 QEMU NVMe Ctrl (12341 ): 4141 I/Os completed (+2228) 00:12:20.265 00:12:21.659 QEMU NVMe Ctrl (12340 ): 6584 I/Os completed (+2216) 00:12:21.659 QEMU NVMe Ctrl (12341 ): 6357 I/Os completed (+2216) 00:12:21.659 00:12:22.595 QEMU NVMe Ctrl (12340 ): 8824 I/Os completed (+2240) 00:12:22.595 QEMU NVMe Ctrl (12341 ): 8597 I/Os completed (+2240) 00:12:22.595 00:12:23.533 QEMU NVMe Ctrl (12340 ): 11012 I/Os completed (+2188) 00:12:23.533 QEMU NVMe Ctrl (12341 ): 10786 I/Os completed (+2189) 00:12:23.533 00:12:24.470 QEMU NVMe Ctrl (12340 ): 13260 I/Os completed (+2248) 00:12:24.470 QEMU NVMe Ctrl (12341 ): 13034 I/Os completed (+2248) 00:12:24.470 00:12:25.407 QEMU NVMe Ctrl (12340 ): 15500 I/Os completed (+2240) 00:12:25.407 QEMU NVMe Ctrl (12341 ): 15274 I/Os completed (+2240) 00:12:25.407 00:12:26.344 QEMU NVMe Ctrl (12340 ): 17736 I/Os completed (+2236) 00:12:26.344 QEMU NVMe Ctrl (12341 ): 17510 I/Os completed (+2236) 00:12:26.344 
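[Editor's sketch] After each removal round, lines 56-62 traced above rebind the controllers so the next event can run: a rescan (@56), then per device a driver pin, an unbind, a reprobe, and a cleanup of the pin. xtrace again drops the redirection targets, so the echoes at @59-@62 look bare; a plausible reconstruction using standard sysfs knobs (the targets are inferred, not shown in the trace):

echo 1 > /sys/bus/pci/rescan                                             # @56
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59: pin the driver
    echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2> /dev/null # @60: drop whatever bound first
    echo "$dev" > /sys/bus/pci/drivers_probe                             # @61: reprobe honors the pin
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: clear the pin
done

The interleaved "Attaching to 0000:00:10.0 / Attached" lines are the hotplug example noticing the controllers as they come back.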
00:12:27.292 QEMU NVMe Ctrl (12340 ): 19980 I/Os completed (+2244) 00:12:27.292 QEMU NVMe Ctrl (12341 ): 19754 I/Os completed (+2244) 00:12:27.292 00:12:28.670 QEMU NVMe Ctrl (12340 ): 22224 I/Os completed (+2244) 00:12:28.670 QEMU NVMe Ctrl (12341 ): 21998 I/Os completed (+2244) 00:12:28.670 00:12:29.237 QEMU NVMe Ctrl (12340 ): 24472 I/Os completed (+2248) 00:12:29.237 QEMU NVMe Ctrl (12341 ): 24246 I/Os completed (+2248) 00:12:29.237 00:12:30.615 QEMU NVMe Ctrl (12340 ): 26704 I/Os completed (+2232) 00:12:30.615 QEMU NVMe Ctrl (12341 ): 26478 I/Os completed (+2232) 00:12:30.615 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:30.615 [2024-11-05 18:01:59.716335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:30.615 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:30.615 [2024-11-05 18:01:59.718183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.718346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.718399] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.718544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:30.615 [2024-11-05 18:01:59.721439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.721498] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.721528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.721547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:30.615 [2024-11-05 18:01:59.759494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:30.615 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:30.615 [2024-11-05 18:01:59.760995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.761042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.761071] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.761090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:30.615 [2024-11-05 18:01:59.763620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.763666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.763687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 [2024-11-05 18:01:59.763706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:30.615 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:30.874 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:30.874 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:30.874 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:30.874 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:30.874 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:30.874 Attaching to 0000:00:10.0 00:12:30.874 Attached to 0000:00:10.0 00:12:30.874 18:02:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:30.874 18:02:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:30.874 18:02:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:30.874 Attaching to 0000:00:11.0 00:12:30.874 Attached to 0000:00:11.0 00:12:31.442 QEMU NVMe Ctrl (12340 ): 1260 I/Os completed (+1260) 00:12:31.442 QEMU NVMe Ctrl (12341 ): 996 I/Os completed (+996) 00:12:31.442 00:12:32.378 QEMU NVMe Ctrl (12340 ): 3496 I/Os completed (+2236) 00:12:32.378 QEMU NVMe Ctrl (12341 ): 3232 I/Os completed (+2236) 00:12:32.378 00:12:33.314 QEMU NVMe Ctrl (12340 ): 5744 I/Os completed (+2248) 00:12:33.314 QEMU NVMe Ctrl (12341 ): 5480 I/Os completed (+2248) 00:12:33.314 00:12:34.248 QEMU NVMe Ctrl (12340 ): 7976 I/Os completed (+2232) 00:12:34.248 QEMU NVMe Ctrl (12341 ): 7712 I/Os completed (+2232) 00:12:34.248 00:12:35.624 QEMU NVMe Ctrl (12340 ): 10128 I/Os completed (+2152) 00:12:35.624 QEMU NVMe Ctrl (12341 ): 9864 I/Os completed (+2152) 00:12:35.624 00:12:36.559 QEMU NVMe Ctrl (12340 ): 12348 I/Os completed (+2220) 00:12:36.559 QEMU NVMe Ctrl (12341 ): 12084 I/Os completed (+2220) 00:12:36.559 00:12:37.494 QEMU NVMe Ctrl (12340 ): 14592 I/Os completed (+2244) 00:12:37.494 QEMU NVMe Ctrl (12341 ): 14332 I/Os completed (+2248) 00:12:37.494 00:12:38.431 QEMU NVMe Ctrl (12340 ): 16816 I/Os completed (+2224) 00:12:38.431 QEMU NVMe Ctrl (12341 ): 16558 I/Os completed (+2226) 00:12:38.431 00:12:39.368 
QEMU NVMe Ctrl (12340 ): 19032 I/Os completed (+2216) 00:12:39.368 QEMU NVMe Ctrl (12341 ): 18774 I/Os completed (+2216) 00:12:39.368 00:12:40.306 QEMU NVMe Ctrl (12340 ): 21260 I/Os completed (+2228) 00:12:40.306 QEMU NVMe Ctrl (12341 ): 21002 I/Os completed (+2228) 00:12:40.306 00:12:41.240 QEMU NVMe Ctrl (12340 ): 23472 I/Os completed (+2212) 00:12:41.240 QEMU NVMe Ctrl (12341 ): 23214 I/Os completed (+2212) 00:12:41.240 00:12:42.618 QEMU NVMe Ctrl (12340 ): 25704 I/Os completed (+2232) 00:12:42.618 QEMU NVMe Ctrl (12341 ): 25446 I/Os completed (+2232) 00:12:42.618 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.877 [2024-11-05 18:02:12.108509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:42.877 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:42.877 [2024-11-05 18:02:12.110329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.110430] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.110477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.110523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:42.877 [2024-11-05 18:02:12.113559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.113700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.113747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.113838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:42.877 [2024-11-05 18:02:12.146282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
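[Editor's sketch] This is the third and last of the helper's three hotplug events, and the whole run is being timed by timing_cmd: it sets TIMEFORMAT=%2R so bash's time keyword prints only the real time with two decimals, which is what feeds the "took 43.12s" summary a few entries below. A self-contained sketch of the capture pattern, not the verbatim common/autotest_common.sh helper:

timing_cmd() {
    local time TIMEFORMAT=%2R
    # The time keyword reports on stderr; fold that into the substitution
    # while discarding the timed command's own output.
    time=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
    echo "$time"
}

helper_time=$(timing_cmd sleep 1.2)   # -> "1.20"
printf 'remove_attach_helper took %ss to complete\n' "$helper_time"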
00:12:42.877 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:42.877 [2024-11-05 18:02:12.147954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.148087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.148143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.148236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:42.877 [2024-11-05 18:02:12.150859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.150981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.151036] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 [2024-11-05 18:02:12.151126] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:42.877 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:43.137 Attaching to 0000:00:10.0 00:12:43.137 Attached to 0000:00:10.0 00:12:43.137 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:43.396 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:43.396 18:02:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:43.396 Attaching to 0000:00:11.0 00:12:43.396 Attached to 0000:00:11.0 00:12:43.396 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:43.396 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:43.396 [2024-11-05 18:02:12.481721] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:55.612 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:55.612 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:55.612 18:02:24 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.12 00:12:55.612 18:02:24 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.12 00:12:55.612 18:02:24 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:55.612 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.12 00:12:55.612 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.12 2 00:12:55.612 remove_attach_helper took 43.12s to complete (handling 2 nvme drive(s)) 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:02.183 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67914 00:13:02.183 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67914) - No such process 00:13:02.183 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67914 00:13:02.183 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:02.183 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:02.183 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:02.183 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68454 00:13:02.183 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:02.184 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:02.184 18:02:30 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68454 00:13:02.184 18:02:30 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 68454 ']' 00:13:02.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.184 18:02:30 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.184 18:02:30 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:02.184 18:02:30 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.184 18:02:30 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:02.184 18:02:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.184 [2024-11-05 18:02:30.614852] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:13:02.184 [2024-11-05 18:02:30.615537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68454 ] 00:13:02.184 [2024-11-05 18:02:30.804344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.184 [2024-11-05 18:02:30.923067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:02.755 18:02:31 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:02.755 18:02:31 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:02.755 18:02:31 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.331 18:02:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 18:02:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 [2024-11-05 18:02:37.908489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:09.331 [2024-11-05 18:02:37.911087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:37.911245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:37.911430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 [2024-11-05 18:02:37.911551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:37.911594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:37.911700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 [2024-11-05 18:02:37.911808] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:37.911851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:37.911948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 [2024-11-05 18:02:37.912132] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:37.912148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:37.912163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 18:02:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 18:02:37 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:09.331 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:09.331 [2024-11-05 18:02:38.307827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:09.331 [2024-11-05 18:02:38.310384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:38.310438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:38.310459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 [2024-11-05 18:02:38.310482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:38.310497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:38.310509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 [2024-11-05 18:02:38.310525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:38.310536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:38.310551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 [2024-11-05 18:02:38.310564] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.331 [2024-11-05 18:02:38.310578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.331 [2024-11-05 18:02:38.310591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.331 18:02:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.331 18:02:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.331 18:02:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.331 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:09.590 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:09.590 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.590 18:02:38 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.590 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.590 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:09.590 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:09.590 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.590 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.808 18:02:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.808 18:02:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.808 18:02:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.808 18:02:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.808 18:02:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.808 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.808 [2024-11-05 18:02:50.987426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:21.808 [2024-11-05 18:02:50.989911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.808 [2024-11-05 18:02:50.990059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.808 [2024-11-05 18:02:50.990163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.808 [2024-11-05 18:02:50.990275] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.808 [2024-11-05 18:02:50.990435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.808 [2024-11-05 18:02:50.990553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.808 [2024-11-05 18:02:50.990618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.808 [2024-11-05 18:02:50.990663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.808 [2024-11-05 18:02:50.990759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.808 [2024-11-05 18:02:50.990865] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:21.808 [2024-11-05 18:02:50.990902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:21.808 [2024-11-05 18:02:50.991061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:21.808 18:02:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.808 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:21.808 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:22.067 [2024-11-05 18:02:51.386759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
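[Editor's sketch] This second helper run uses use_bdev=true, so instead of trusting timers it asks the running SPDK target which controllers are still present: bdev_bdfs pipes rpc_cmd bdev_get_bdevs through the jq filter seen at @12 (the /dev/fd/63 in the trace is the process substitution) and de-duplicates with sort -u. A sketch of the helper as traced, assuming rpc_cmd resolves to scripts/rpc.py against the spdk_tgt started above:

bdev_bdfs() {
    # List the PCI addresses that still back an NVMe bdev, one per line.
    jq -r '.[].driver_specific.nvme[].pci_address' \
        <(rpc_cmd bdev_get_bdevs) | sort -u
}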
00:13:22.067 [2024-11-05 18:02:51.389091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.067 [2024-11-05 18:02:51.389223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.067 [2024-11-05 18:02:51.389422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.067 [2024-11-05 18:02:51.389659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.067 [2024-11-05 18:02:51.389700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.068 [2024-11-05 18:02:51.389750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.068 [2024-11-05 18:02:51.389853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.068 [2024-11-05 18:02:51.389890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.068 [2024-11-05 18:02:51.389942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.068 [2024-11-05 18:02:51.389992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:22.068 [2024-11-05 18:02:51.390101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:22.068 [2024-11-05 18:02:51.390151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:22.326 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:22.326 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:22.326 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:22.326 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:22.326 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:22.327 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:22.327 18:02:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.327 18:02:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:22.327 18:02:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.327 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:22.327 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:22.585 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:34.796 18:03:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.796 18:03:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:34.796 18:03:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.796 [2024-11-05 18:03:03.966571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:34.796 [2024-11-05 18:03:03.969437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.796 [2024-11-05 18:03:03.969591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.796 [2024-11-05 18:03:03.969713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.796 [2024-11-05 18:03:03.969803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.796 [2024-11-05 18:03:03.969916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.796 [2024-11-05 18:03:03.969988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.796 [2024-11-05 18:03:03.970134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.796 [2024-11-05 18:03:03.970249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.796 [2024-11-05 18:03:03.970346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.796 [2024-11-05 18:03:03.970419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:34.796 [2024-11-05 18:03:03.970556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.796 [2024-11-05 18:03:03.970631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:34.796 18:03:03 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:34.796 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:34.796 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:34.796 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:34.797 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:34.797 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:34.797 18:03:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.797 18:03:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:34.797 18:03:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.797 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:34.797 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:35.056 [2024-11-05 18:03:04.365913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:35.056 [2024-11-05 18:03:04.368263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.056 [2024-11-05 18:03:04.368303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.056 [2024-11-05 18:03:04.368323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.056 [2024-11-05 18:03:04.368346] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.056 [2024-11-05 18:03:04.368360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.056 [2024-11-05 18:03:04.368372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.056 [2024-11-05 18:03:04.368388] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.056 [2024-11-05 18:03:04.368399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.056 [2024-11-05 18:03:04.368427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.056 [2024-11-05 18:03:04.368441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:35.056 [2024-11-05 18:03:04.368454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:35.056 [2024-11-05 18:03:04.368466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:35.316 18:03:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.316 18:03:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.316 18:03:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:35.316 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:35.575 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:35.834 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:35.834 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:35.834 18:03:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:48.047 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:48.047 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:48.047 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:48.047 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.047 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.048 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.048 18:03:16 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.048 18:03:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.048 18:03:16 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.048 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:48.048 18:03:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.17 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.17 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:13:48.048 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.048 18:03:17 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:13:48.048 18:03:17 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:48.048 18:03:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:54.617 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:54.617 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.617 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.617 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.618 18:03:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.618 18:03:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.618 [2024-11-05 18:03:23.107873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:54.618 [2024-11-05 18:03:23.109489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.109533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.109559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 [2024-11-05 18:03:23.109586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.109598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.109613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 [2024-11-05 18:03:23.109627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.109641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.109653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 [2024-11-05 18:03:23.109669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.109681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.109699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 18:03:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:54.618 [2024-11-05 18:03:23.507231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
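[Editor's sketch] The (( 2 > 0 )) / sleep 0.5 / printf sequence traced around these removals is the poll that produces the "Still waiting for %s to be gone" lines: after writing the removal events, the helper keeps re-listing bdfs until the target reports no NVMe-backed bdevs left. The loop shape as reconstructed from the trace (names follow the trace, not the verbatim sw_hotplug.sh):

bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done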
00:13:54.618 [2024-11-05 18:03:23.509529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.509592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.509612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 [2024-11-05 18:03:23.509636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.509651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.509664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 [2024-11-05 18:03:23.509680] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.509692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.509707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 [2024-11-05 18:03:23.509721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:54.618 [2024-11-05 18:03:23.509736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:54.618 [2024-11-05 18:03:23.509749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:54.618 18:03:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.618 18:03:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:54.618 18:03:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:54.618 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:54.877 18:03:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:54.877 18:03:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:54.877 18:03:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.089 18:03:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.089 18:03:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 18:03:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.089 18:03:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.089 18:03:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 [2024-11-05 18:03:36.186823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
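The backslash-heavy pattern in the sw_hotplug.sh@71 test above is only xtrace escaping a literal string comparison. Reconstructed from the @70/@71 lines, the post-rescan check is essentially:

    # bdfs is filled from bdev_bdfs (sw_hotplug.sh@70); the iteration passes
    # when the re-attached controllers enumerate with exactly these BDFs.
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == '0000:00:10.0 0000:00:11.0' ]]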
00:14:07.089 [2024-11-05 18:03:36.188620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.089 [2024-11-05 18:03:36.188672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.089 [2024-11-05 18:03:36.188689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.089 [2024-11-05 18:03:36.188716] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.089 [2024-11-05 18:03:36.188729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.089 [2024-11-05 18:03:36.188745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.089 [2024-11-05 18:03:36.188759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.089 [2024-11-05 18:03:36.188773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.089 [2024-11-05 18:03:36.188785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.089 [2024-11-05 18:03:36.188801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.089 [2024-11-05 18:03:36.188813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.089 [2024-11-05 18:03:36.188828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.089 18:03:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:07.089 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:07.349 [2024-11-05 18:03:36.586179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
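Pieced together from the repeated sw_hotplug.sh@12-13 lines in this trace, the bdev_bdfs helper that drives this wait loop amounts to:

    # Reconstructed from the xtrace (the trace uses process substitution,
    # /dev/fd/63; a plain pipe is equivalent): the unique PCI addresses that
    # still back NVMe bdevs, per the bdev_get_bdevs RPC.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }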
00:14:07.349 [2024-11-05 18:03:36.587776] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.349 [2024-11-05 18:03:36.587815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.349 [2024-11-05 18:03:36.587834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.349 [2024-11-05 18:03:36.587856] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.349 [2024-11-05 18:03:36.587874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.349 [2024-11-05 18:03:36.587886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.349 [2024-11-05 18:03:36.587902] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.349 [2024-11-05 18:03:36.587914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.349 [2024-11-05 18:03:36.587929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.349 [2024-11-05 18:03:36.587941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:07.349 [2024-11-05 18:03:36.587955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:07.349 [2024-11-05 18:03:36.587967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:07.609 18:03:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.609 18:03:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:07.609 18:03:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.609 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:07.868 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:07.868 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.868 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:07.868 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:07.868 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:07.868 18:03:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:07.868 18:03:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:07.868 18:03:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:20.134 18:03:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.134 18:03:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:20.134 18:03:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:20.134 [2024-11-05 18:03:49.165934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:20.134 [2024-11-05 18:03:49.168634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.134 [2024-11-05 18:03:49.168792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.134 [2024-11-05 18:03:49.168901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.134 [2024-11-05 18:03:49.168969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.134 [2024-11-05 18:03:49.169003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.134 [2024-11-05 18:03:49.169107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.134 [2024-11-05 18:03:49.169165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.134 [2024-11-05 18:03:49.169203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.134 [2024-11-05 18:03:49.169253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.134 [2024-11-05 18:03:49.169359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.134 [2024-11-05 18:03:49.169393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.134 [2024-11-05 18:03:49.169466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:20.134 18:03:49 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:20.134 18:03:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.134 18:03:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:20.134 18:03:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:20.134 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:20.394 [2024-11-05 18:03:49.565294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:20.394 [2024-11-05 18:03:49.567748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.394 [2024-11-05 18:03:49.567882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.394 [2024-11-05 18:03:49.567924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.394 [2024-11-05 18:03:49.567943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.394 [2024-11-05 18:03:49.567958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.394 [2024-11-05 18:03:49.567971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.394 [2024-11-05 18:03:49.567988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.394 [2024-11-05 18:03:49.568000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.394 [2024-11-05 18:03:49.568014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.394 [2024-11-05 18:03:49.568027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:20.394 [2024-11-05 18:03:49.568044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:20.394 [2024-11-05 18:03:49.568056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:14:20.653 18:03:49 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.653 18:03:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:20.653 18:03:49 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:20.653 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:20.913 18:03:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:20.913 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:20.913 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:20.913 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:20.913 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:20.913 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:20.913 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:20.913 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.14 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.14 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:14:33.125 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:33.125 18:04:02 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68454 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 68454 ']' 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 68454 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68454 00:14:33.125 killing process with pid 68454 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@958 -- # 
process_name=reactor_0 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68454' 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@971 -- # kill 68454 00:14:33.125 18:04:02 sw_hotplug -- common/autotest_common.sh@976 -- # wait 68454 00:14:35.660 18:04:04 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:35.920 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:36.488 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.488 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:36.488 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:36.488 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:36.747 00:14:36.747 real 2m33.666s 00:14:36.747 user 1m51.082s 00:14:36.747 sys 0m22.757s 00:14:36.747 ************************************ 00:14:36.747 18:04:05 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.747 18:04:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:36.747 END TEST sw_hotplug 00:14:36.747 ************************************ 00:14:36.747 18:04:05 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:36.747 18:04:05 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:36.747 18:04:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:36.747 18:04:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.747 18:04:05 -- common/autotest_common.sh@10 -- # set +x 00:14:36.747 ************************************ 00:14:36.747 START TEST nvme_xnvme 00:14:36.747 ************************************ 00:14:36.747 18:04:05 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:36.747 * Looking for test storage... 
00:14:36.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:36.747 18:04:06 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:36.747 18:04:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:36.747 18:04:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:37.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.007 --rc genhtml_branch_coverage=1 00:14:37.007 --rc genhtml_function_coverage=1 00:14:37.007 --rc genhtml_legend=1 00:14:37.007 --rc geninfo_all_blocks=1 00:14:37.007 --rc geninfo_unexecuted_blocks=1 00:14:37.007 00:14:37.007 ' 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:37.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.007 --rc genhtml_branch_coverage=1 00:14:37.007 --rc genhtml_function_coverage=1 00:14:37.007 --rc genhtml_legend=1 00:14:37.007 --rc geninfo_all_blocks=1 00:14:37.007 --rc geninfo_unexecuted_blocks=1 00:14:37.007 00:14:37.007 ' 00:14:37.007 18:04:06 
nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:37.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.007 --rc genhtml_branch_coverage=1 00:14:37.007 --rc genhtml_function_coverage=1 00:14:37.007 --rc genhtml_legend=1 00:14:37.007 --rc geninfo_all_blocks=1 00:14:37.007 --rc geninfo_unexecuted_blocks=1 00:14:37.007 00:14:37.007 ' 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:37.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:37.007 --rc genhtml_branch_coverage=1 00:14:37.007 --rc genhtml_function_coverage=1 00:14:37.007 --rc genhtml_legend=1 00:14:37.007 --rc geninfo_all_blocks=1 00:14:37.007 --rc geninfo_unexecuted_blocks=1 00:14:37.007 00:14:37.007 ' 00:14:37.007 18:04:06 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.007 18:04:06 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.007 18:04:06 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.007 18:04:06 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.007 18:04:06 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.007 18:04:06 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:37.007 18:04:06 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.007 18:04:06 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.007 18:04:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.007 
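The xnvme copy test that starts below runs against the null_blk kernel module, not real media. The init_null_blk/remove_null_blk pair traced at dd/common.sh@186 and @191 reduces to:

    # Sketch of the null_blk lifecycle used throughout this test: a 1 GiB
    # RAM-backed block device surfaces as /dev/nullb0.
    modprobe null_blk gb=1     # create /dev/nullb0
    # ... spdk_dd copies run against /dev/nullb0 via an xnvme bdev ...
    modprobe -r null_blk       # teardown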
************************************ 00:14:37.007 START TEST xnvme_to_malloc_dd_copy 00:14:37.007 ************************************ 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:14:37.007 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:37.008 18:04:06 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:37.008 { 00:14:37.008 "subsystems": [ 00:14:37.008 { 00:14:37.008 "subsystem": "bdev", 00:14:37.008 "config": [ 00:14:37.008 { 00:14:37.008 "params": { 00:14:37.008 "block_size": 512, 00:14:37.008 "num_blocks": 2097152, 00:14:37.008 "name": "malloc0" 00:14:37.008 }, 00:14:37.008 "method": "bdev_malloc_create" 00:14:37.008 }, 00:14:37.008 { 00:14:37.008 "params": { 00:14:37.008 "io_mechanism": "libaio", 00:14:37.008 "filename": "/dev/nullb0", 00:14:37.008 "name": "null0" 00:14:37.008 }, 00:14:37.008 "method": "bdev_xnvme_create" 00:14:37.008 }, 
00:14:37.008 { 00:14:37.008 "method": "bdev_wait_for_examine" 00:14:37.008 } 00:14:37.008 ] 00:14:37.008 } 00:14:37.008 ] 00:14:37.008 } 00:14:37.008 [2024-11-05 18:04:06.280004] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:14:37.008 [2024-11-05 18:04:06.280116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69825 ] 00:14:37.267 [2024-11-05 18:04:06.459496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.267 [2024-11-05 18:04:06.564580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.804  [2024-11-05T18:04:10.066Z] Copying: 273/1024 [MB] (273 MBps) [2024-11-05T18:04:11.005Z] Copying: 550/1024 [MB] (276 MBps) [2024-11-05T18:04:11.958Z] Copying: 825/1024 [MB] (275 MBps) [2024-11-05T18:04:16.151Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:14:46.828 00:14:46.828 18:04:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:46.828 18:04:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:46.828 18:04:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:46.828 18:04:15 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:46.828 { 00:14:46.828 "subsystems": [ 00:14:46.828 { 00:14:46.828 "subsystem": "bdev", 00:14:46.828 "config": [ 00:14:46.828 { 00:14:46.828 "params": { 00:14:46.828 "block_size": 512, 00:14:46.828 "num_blocks": 2097152, 00:14:46.828 "name": "malloc0" 00:14:46.828 }, 00:14:46.828 "method": "bdev_malloc_create" 00:14:46.828 }, 00:14:46.828 { 00:14:46.828 "params": { 00:14:46.828 "io_mechanism": "libaio", 00:14:46.828 "filename": "/dev/nullb0", 00:14:46.828 "name": "null0" 00:14:46.828 }, 00:14:46.828 "method": "bdev_xnvme_create" 00:14:46.828 }, 00:14:46.828 { 00:14:46.828 "method": "bdev_wait_for_examine" 00:14:46.828 } 00:14:46.828 ] 00:14:46.828 } 00:14:46.828 ] 00:14:46.828 } 00:14:46.828 [2024-11-05 18:04:15.566862] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
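In the invocations above, --json /dev/fd/62 is gen_conf feeding the printed subsystem block over process substitution. A rough standalone equivalent, assuming the JSON above is saved to a file (config.json is a hypothetical name, not from this run):

    # Forward pass (malloc0 -> null0); the read-back pass that follows in
    # the trace simply swaps --ib and --ob.
    build/bin/spdk_dd --ib=malloc0 --ob=null0 --json config.json
    build/bin/spdk_dd --ib=null0 --ob=malloc0 --json config.json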
00:14:46.828 [2024-11-05 18:04:15.566999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69934 ] 00:14:46.828 [2024-11-05 18:04:15.746445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.828 [2024-11-05 18:04:15.844019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.366  [2024-11-05T18:04:19.260Z] Copying: 273/1024 [MB] (273 MBps) [2024-11-05T18:04:20.200Z] Copying: 549/1024 [MB] (275 MBps) [2024-11-05T18:04:21.137Z] Copying: 823/1024 [MB] (273 MBps) [2024-11-05T18:04:25.331Z] Copying: 1024/1024 [MB] (average 275 MBps) 00:14:56.008 00:14:56.008 18:04:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:14:56.008 18:04:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:56.008 18:04:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:14:56.008 18:04:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:14:56.008 18:04:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:56.008 18:04:24 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:56.008 { 00:14:56.008 "subsystems": [ 00:14:56.008 { 00:14:56.008 "subsystem": "bdev", 00:14:56.008 "config": [ 00:14:56.008 { 00:14:56.008 "params": { 00:14:56.008 "block_size": 512, 00:14:56.008 "num_blocks": 2097152, 00:14:56.008 "name": "malloc0" 00:14:56.008 }, 00:14:56.008 "method": "bdev_malloc_create" 00:14:56.008 }, 00:14:56.008 { 00:14:56.008 "params": { 00:14:56.008 "io_mechanism": "io_uring", 00:14:56.008 "filename": "/dev/nullb0", 00:14:56.008 "name": "null0" 00:14:56.008 }, 00:14:56.008 "method": "bdev_xnvme_create" 00:14:56.008 }, 00:14:56.008 { 00:14:56.008 "method": "bdev_wait_for_examine" 00:14:56.008 } 00:14:56.008 ] 00:14:56.008 } 00:14:56.008 ] 00:14:56.008 } 00:14:56.008 [2024-11-05 18:04:24.754460] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:14:56.008 [2024-11-05 18:04:24.754575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70038 ] 00:14:56.008 [2024-11-05 18:04:24.936045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.008 [2024-11-05 18:04:25.043337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.592  [2024-11-05T18:04:28.483Z] Copying: 285/1024 [MB] (285 MBps) [2024-11-05T18:04:29.420Z] Copying: 575/1024 [MB] (289 MBps) [2024-11-05T18:04:29.987Z] Copying: 865/1024 [MB] (290 MBps) [2024-11-05T18:04:34.177Z] Copying: 1024/1024 [MB] (average 289 MBps) 00:15:04.854 00:15:04.854 18:04:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:15:04.854 18:04:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:15:04.854 18:04:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:15:04.854 18:04:33 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:04.854 { 00:15:04.854 "subsystems": [ 00:15:04.854 { 00:15:04.854 "subsystem": "bdev", 00:15:04.854 "config": [ 00:15:04.854 { 00:15:04.854 "params": { 00:15:04.854 "block_size": 512, 00:15:04.854 "num_blocks": 2097152, 00:15:04.854 "name": "malloc0" 00:15:04.854 }, 00:15:04.854 "method": "bdev_malloc_create" 00:15:04.854 }, 00:15:04.854 { 00:15:04.854 "params": { 00:15:04.854 "io_mechanism": "io_uring", 00:15:04.854 "filename": "/dev/nullb0", 00:15:04.854 "name": "null0" 00:15:04.854 }, 00:15:04.854 "method": "bdev_xnvme_create" 00:15:04.854 }, 00:15:04.854 { 00:15:04.854 "method": "bdev_wait_for_examine" 00:15:04.854 } 00:15:04.854 ] 00:15:04.854 } 00:15:04.854 ] 00:15:04.854 } 00:15:04.854 [2024-11-05 18:04:33.762446] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
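Between the libaio and io_uring passes, the only parameter that changes is io_mechanism in the bdev_xnvme_create method. Created over RPC instead of JSON config, the same bdev would look something like the following; the argument order mirrors the params shown above and is an assumption, not lifted from this run:

    # Hedged sketch: filename, bdev name, io mechanism.
    scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring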
00:15:04.854 [2024-11-05 18:04:33.762574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70142 ] 00:15:04.854 [2024-11-05 18:04:33.942338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.854 [2024-11-05 18:04:34.045479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.409  [2024-11-05T18:04:37.669Z] Copying: 293/1024 [MB] (293 MBps) [2024-11-05T18:04:38.608Z] Copying: 587/1024 [MB] (294 MBps) [2024-11-05T18:04:39.176Z] Copying: 881/1024 [MB] (293 MBps) [2024-11-05T18:04:43.375Z] Copying: 1024/1024 [MB] (average 293 MBps) 00:15:14.052 00:15:14.052 18:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:15:14.052 18:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:14.052 00:15:14.052 real 0m36.494s 00:15:14.052 user 0m31.872s 00:15:14.052 sys 0m4.154s 00:15:14.052 18:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:14.052 ************************************ 00:15:14.052 END TEST xnvme_to_malloc_dd_copy 00:15:14.052 ************************************ 00:15:14.052 18:04:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:15:14.052 18:04:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:14.052 18:04:42 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:14.052 18:04:42 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:14.052 18:04:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.052 ************************************ 00:15:14.052 START TEST xnvme_bdevperf 00:15:14.052 ************************************ 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:14.052 
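For reference, the bdevperf flags in the invocation traced just below, glossed (a best-effort annotation, not taken from the script itself):

    # -q 64        queue depth
    # -w randread  workload pattern
    # -t 5         run time, seconds
    # -T null0     limit the run to the bdev named null0
    # -o 4096      I/O size, bytes
    build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T null0 -o 4096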
18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:14.052 18:04:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:14.052 { 00:15:14.052 "subsystems": [ 00:15:14.052 { 00:15:14.052 "subsystem": "bdev", 00:15:14.052 "config": [ 00:15:14.052 { 00:15:14.052 "params": { 00:15:14.052 "io_mechanism": "libaio", 00:15:14.052 "filename": "/dev/nullb0", 00:15:14.052 "name": "null0" 00:15:14.052 }, 00:15:14.052 "method": "bdev_xnvme_create" 00:15:14.052 }, 00:15:14.052 { 00:15:14.052 "method": "bdev_wait_for_examine" 00:15:14.052 } 00:15:14.052 ] 00:15:14.052 } 00:15:14.052 ] 00:15:14.052 } 00:15:14.052 [2024-11-05 18:04:42.867723] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:15:14.052 [2024-11-05 18:04:42.867851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70271 ] 00:15:14.052 [2024-11-05 18:04:43.039848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.052 [2024-11-05 18:04:43.148678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.311 Running I/O for 5 seconds... 00:15:16.190 164032.00 IOPS, 640.75 MiB/s [2024-11-05T18:04:46.891Z] 164800.00 IOPS, 643.75 MiB/s [2024-11-05T18:04:47.830Z] 165226.67 IOPS, 645.42 MiB/s [2024-11-05T18:04:48.767Z] 165440.00 IOPS, 646.25 MiB/s [2024-11-05T18:04:48.767Z] 165606.40 IOPS, 646.90 MiB/s 00:15:19.444 Latency(us) 00:15:19.444 [2024-11-05T18:04:48.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.444 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:19.444 null0 : 5.00 165562.17 646.73 0.00 0.00 384.28 129.13 1723.94 00:15:19.444 [2024-11-05T18:04:48.767Z] =================================================================================================================== 00:15:19.444 [2024-11-05T18:04:48.767Z] Total : 165562.17 646.73 0.00 0.00 384.28 129.13 1723.94 00:15:20.420 18:04:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:15:20.420 18:04:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:15:20.421 18:04:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:15:20.421 18:04:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:15:20.421 18:04:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:20.421 18:04:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:20.421 { 00:15:20.421 "subsystems": [ 00:15:20.421 { 00:15:20.421 "subsystem": "bdev", 00:15:20.421 "config": [ 00:15:20.421 { 00:15:20.421 "params": { 00:15:20.421 "io_mechanism": "io_uring", 00:15:20.421 "filename": "/dev/nullb0", 00:15:20.421 "name": "null0" 00:15:20.421 }, 00:15:20.421 "method": "bdev_xnvme_create" 00:15:20.421 }, 
00:15:20.421 { 00:15:20.421 "method": "bdev_wait_for_examine" 00:15:20.421 } 00:15:20.421 ] 00:15:20.421 } 00:15:20.421 ] 00:15:20.421 } 00:15:20.421 [2024-11-05 18:04:49.622375] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:15:20.421 [2024-11-05 18:04:49.622519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70346 ] 00:15:20.681 [2024-11-05 18:04:49.803300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.681 [2024-11-05 18:04:49.904485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.940 Running I/O for 5 seconds... 00:15:23.256 213824.00 IOPS, 835.25 MiB/s [2024-11-05T18:04:53.516Z] 213920.00 IOPS, 835.62 MiB/s [2024-11-05T18:04:54.454Z] 214293.33 IOPS, 837.08 MiB/s [2024-11-05T18:04:55.390Z] 214160.00 IOPS, 836.56 MiB/s 00:15:26.067 Latency(us) 00:15:26.067 [2024-11-05T18:04:55.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.067 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:26.067 null0 : 5.00 214461.98 837.74 0.00 0.00 296.10 188.35 1579.18 00:15:26.067 [2024-11-05T18:04:55.390Z] =================================================================================================================== 00:15:26.067 [2024-11-05T18:04:55.390Z] Total : 214461.98 837.74 0.00 0.00 296.10 188.35 1579.18 00:15:27.074 18:04:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:15:27.074 18:04:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:15:27.074 00:15:27.074 real 0m13.581s 00:15:27.074 user 0m10.166s 00:15:27.074 sys 0m3.222s 00:15:27.074 18:04:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:27.074 ************************************ 00:15:27.074 END TEST xnvme_bdevperf 00:15:27.074 18:04:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:27.074 ************************************ 00:15:27.074 00:15:27.074 real 0m50.469s 00:15:27.074 user 0m42.221s 00:15:27.074 sys 0m7.587s 00:15:27.074 18:04:56 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:27.074 18:04:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.074 ************************************ 00:15:27.074 END TEST nvme_xnvme 00:15:27.074 ************************************ 00:15:27.333 18:04:56 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:27.333 18:04:56 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:27.333 18:04:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:27.333 18:04:56 -- common/autotest_common.sh@10 -- # set +x 00:15:27.333 ************************************ 00:15:27.333 START TEST blockdev_xnvme 00:15:27.333 ************************************ 00:15:27.333 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:27.333 * Looking for test storage... 
00:15:27.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:27.333 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:27.333 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:15:27.333 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:27.333 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:27.333 18:04:56 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.334 18:04:56 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.594 18:04:56 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.594 --rc genhtml_branch_coverage=1 00:15:27.594 --rc genhtml_function_coverage=1 00:15:27.594 --rc genhtml_legend=1 00:15:27.594 --rc geninfo_all_blocks=1 00:15:27.594 --rc geninfo_unexecuted_blocks=1 00:15:27.594 00:15:27.594 ' 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.594 --rc genhtml_branch_coverage=1 00:15:27.594 --rc genhtml_function_coverage=1 00:15:27.594 --rc genhtml_legend=1 
00:15:27.594 --rc geninfo_all_blocks=1 00:15:27.594 --rc geninfo_unexecuted_blocks=1 00:15:27.594 00:15:27.594 ' 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.594 --rc genhtml_branch_coverage=1 00:15:27.594 --rc genhtml_function_coverage=1 00:15:27.594 --rc genhtml_legend=1 00:15:27.594 --rc geninfo_all_blocks=1 00:15:27.594 --rc geninfo_unexecuted_blocks=1 00:15:27.594 00:15:27.594 ' 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:27.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.594 --rc genhtml_branch_coverage=1 00:15:27.594 --rc genhtml_function_coverage=1 00:15:27.594 --rc genhtml_legend=1 00:15:27.594 --rc geninfo_all_blocks=1 00:15:27.594 --rc geninfo_unexecuted_blocks=1 00:15:27.594 00:15:27.594 ' 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70499 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:27.594 18:04:56 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70499 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@833 -- # 
'[' -z 70499 ']' 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:27.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:27.594 18:04:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.594 [2024-11-05 18:04:56.796140] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:15:27.594 [2024-11-05 18:04:56.796277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70499 ] 00:15:27.854 [2024-11-05 18:04:56.975253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.854 [2024-11-05 18:04:57.079923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.791 18:04:57 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.791 18:04:57 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:15:28.791 18:04:57 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:28.791 18:04:57 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:28.791 18:04:57 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:28.791 18:04:57 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:28.791 18:04:57 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:29.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:29.621 Waiting for block devices as requested 00:15:29.621 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.621 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.880 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:29.880 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:35.154 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:15:35.154 
18:05:04 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:15:35.154 nvme0n1 00:15:35.154 nvme1n1 00:15:35.154 nvme2n1 00:15:35.154 nvme2n2 00:15:35.154 nvme2n3 00:15:35.154 nvme3n1 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:35.154 18:05:04 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.154 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:35.155 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:35.155 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d8e0af6b-b4c9-4f98-b4e1-2d609735d133"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d8e0af6b-b4c9-4f98-b4e1-2d609735d133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "63fc7a0a-0377-4c70-a93b-d0f9e69b5d9a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "63fc7a0a-0377-4c70-a93b-d0f9e69b5d9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2df15308-fb39-4523-bc2e-665b9b17eb67"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2df15308-fb39-4523-bc2e-665b9b17eb67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c26a81f7-94c1-4441-a6e2-c32923990a1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c26a81f7-94c1-4441-a6e2-c32923990a1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "63300d51-0993-4007-a935-6105b116bfbf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63300d51-0993-4007-a935-6105b116bfbf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1125490f-f076-4f8d-8af1-4ca0432400c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1125490f-f076-4f8d-8af1-4ca0432400c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:35.414 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:35.414 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:35.414 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:35.414 18:05:04 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70499 00:15:35.414 18:05:04 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # '[' -z 70499 ']' 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 70499 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70499 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:35.414 killing process with pid 70499 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70499' 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 70499 00:15:35.414 18:05:04 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 70499 00:15:37.949 18:05:06 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:37.949 18:05:06 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:37.949 18:05:06 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:37.949 18:05:06 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:37.949 18:05:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.949 ************************************ 00:15:37.949 START TEST bdev_hello_world 00:15:37.949 ************************************ 00:15:37.949 18:05:06 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:37.949 [2024-11-05 18:05:06.895083] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:15:37.949 [2024-11-05 18:05:06.895219] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:15:37.949 [2024-11-05 18:05:07.074520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.949 [2024-11-05 18:05:07.172763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.517 [2024-11-05 18:05:07.597236] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:38.517 [2024-11-05 18:05:07.597285] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:38.517 [2024-11-05 18:05:07.597319] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:38.517 [2024-11-05 18:05:07.599399] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:38.517 [2024-11-05 18:05:07.599901] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:38.517 [2024-11-05 18:05:07.599929] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:38.517 [2024-11-05 18:05:07.600184] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
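The hello_bdev pass that just completed exercises the full bdev I/O path: open the bdev, acquire an I/O channel, write "Hello World!", read it back, and compare. A minimal sketch for re-running only this step by hand, using the binary, config, and flags exactly as they appear in the trace (the run_test timing/xtrace harness is left out):

    hello_bdev=/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -b names the bdev from the JSON config to open; the trailing '' is the
    # empty extra-arguments slot the harness always passes
    "$hello_bdev" --json "$conf" -b nvme0n1 ''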
00:15:38.517 00:15:38.517 [2024-11-05 18:05:07.600211] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:39.455 00:15:39.455 real 0m1.862s 00:15:39.455 user 0m1.507s 00:15:39.455 sys 0m0.239s 00:15:39.455 18:05:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:39.455 ************************************ 00:15:39.455 END TEST bdev_hello_world 00:15:39.455 ************************************ 00:15:39.455 18:05:08 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 18:05:08 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:39.455 18:05:08 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:39.455 18:05:08 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:39.455 18:05:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.455 ************************************ 00:15:39.455 START TEST bdev_bounds 00:15:39.455 ************************************ 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=70919 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:39.455 Process bdevio pid: 70919 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 70919' 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 70919 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 70919 ']' 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:39.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:39.455 18:05:08 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:39.714 [2024-11-05 18:05:08.831531] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
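bdev_bounds drives the bdevio application starting here: bdevio registers every bdev from the JSON config and, with -w as used here, holds the tests until tests.py triggers them over the RPC socket. A condensed sketch of the sequence, with paths and flags taken from the trace (waitforlisten and killprocess are the same common helpers used throughout this log):

    bdevio=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    "$bdevio" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"   # wait for /var/tmp/spdk.sock to accept RPCs
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"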
00:15:39.714 [2024-11-05 18:05:08.831672] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70919 ] 00:15:39.714 [2024-11-05 18:05:09.015831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.974 [2024-11-05 18:05:09.120616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.974 [2024-11-05 18:05:09.120750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.974 [2024-11-05 18:05:09.120779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.542 18:05:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:40.542 18:05:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:15:40.542 18:05:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:40.542 I/O targets: 00:15:40.542 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:40.542 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:40.542 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:40.542 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:40.542 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:40.542 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:40.542 00:15:40.542 00:15:40.542 CUnit - A unit testing framework for C - Version 2.1-3 00:15:40.542 http://cunit.sourceforge.net/ 00:15:40.542 00:15:40.542 00:15:40.542 Suite: bdevio tests on: nvme3n1 00:15:40.542 Test: blockdev write read block ...passed 00:15:40.542 Test: blockdev write zeroes read block ...passed 00:15:40.542 Test: blockdev write zeroes read no split ...passed 00:15:40.542 Test: blockdev write zeroes read split ...passed 00:15:40.542 Test: blockdev write zeroes read split partial ...passed 00:15:40.542 Test: blockdev reset ...passed 00:15:40.542 Test: blockdev write read 8 blocks ...passed 00:15:40.542 Test: blockdev write read size > 128k ...passed 00:15:40.542 Test: blockdev write read invalid size ...passed 00:15:40.542 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.542 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.542 Test: blockdev write read max offset ...passed 00:15:40.542 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.542 Test: blockdev writev readv 8 blocks ...passed 00:15:40.542 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.542 Test: blockdev writev readv block ...passed 00:15:40.542 Test: blockdev writev readv size > 128k ...passed 00:15:40.542 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.542 Test: blockdev comparev and writev ...passed 00:15:40.542 Test: blockdev nvme passthru rw ...passed 00:15:40.542 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.542 Test: blockdev nvme admin passthru ...passed 00:15:40.542 Test: blockdev copy ...passed 00:15:40.542 Suite: bdevio tests on: nvme2n3 00:15:40.542 Test: blockdev write read block ...passed 00:15:40.542 Test: blockdev write zeroes read block ...passed 00:15:40.542 Test: blockdev write zeroes read no split ...passed 00:15:40.542 Test: blockdev write zeroes read split ...passed 00:15:40.801 Test: blockdev write zeroes read split partial ...passed 00:15:40.801 Test: blockdev reset ...passed 
00:15:40.801 Test: blockdev write read 8 blocks ...passed 00:15:40.801 Test: blockdev write read size > 128k ...passed 00:15:40.801 Test: blockdev write read invalid size ...passed 00:15:40.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.801 Test: blockdev write read max offset ...passed 00:15:40.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.801 Test: blockdev writev readv 8 blocks ...passed 00:15:40.801 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.801 Test: blockdev writev readv block ...passed 00:15:40.801 Test: blockdev writev readv size > 128k ...passed 00:15:40.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.801 Test: blockdev comparev and writev ...passed 00:15:40.801 Test: blockdev nvme passthru rw ...passed 00:15:40.801 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.801 Test: blockdev nvme admin passthru ...passed 00:15:40.801 Test: blockdev copy ...passed 00:15:40.801 Suite: bdevio tests on: nvme2n2 00:15:40.801 Test: blockdev write read block ...passed 00:15:40.801 Test: blockdev write zeroes read block ...passed 00:15:40.801 Test: blockdev write zeroes read no split ...passed 00:15:40.801 Test: blockdev write zeroes read split ...passed 00:15:40.801 Test: blockdev write zeroes read split partial ...passed 00:15:40.801 Test: blockdev reset ...passed 00:15:40.801 Test: blockdev write read 8 blocks ...passed 00:15:40.801 Test: blockdev write read size > 128k ...passed 00:15:40.801 Test: blockdev write read invalid size ...passed 00:15:40.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.801 Test: blockdev write read max offset ...passed 00:15:40.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.801 Test: blockdev writev readv 8 blocks ...passed 00:15:40.801 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.801 Test: blockdev writev readv block ...passed 00:15:40.801 Test: blockdev writev readv size > 128k ...passed 00:15:40.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.801 Test: blockdev comparev and writev ...passed 00:15:40.801 Test: blockdev nvme passthru rw ...passed 00:15:40.801 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.801 Test: blockdev nvme admin passthru ...passed 00:15:40.801 Test: blockdev copy ...passed 00:15:40.801 Suite: bdevio tests on: nvme2n1 00:15:40.801 Test: blockdev write read block ...passed 00:15:40.801 Test: blockdev write zeroes read block ...passed 00:15:40.801 Test: blockdev write zeroes read no split ...passed 00:15:40.801 Test: blockdev write zeroes read split ...passed 00:15:40.801 Test: blockdev write zeroes read split partial ...passed 00:15:40.801 Test: blockdev reset ...passed 00:15:40.801 Test: blockdev write read 8 blocks ...passed 00:15:40.801 Test: blockdev write read size > 128k ...passed 00:15:40.801 Test: blockdev write read invalid size ...passed 00:15:40.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.801 Test: blockdev write read max offset ...passed 00:15:40.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.801 Test: blockdev writev readv 8 blocks 
...passed 00:15:40.801 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.801 Test: blockdev writev readv block ...passed 00:15:40.801 Test: blockdev writev readv size > 128k ...passed 00:15:40.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.801 Test: blockdev comparev and writev ...passed 00:15:40.801 Test: blockdev nvme passthru rw ...passed 00:15:40.801 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.801 Test: blockdev nvme admin passthru ...passed 00:15:40.801 Test: blockdev copy ...passed 00:15:40.801 Suite: bdevio tests on: nvme1n1 00:15:40.801 Test: blockdev write read block ...passed 00:15:40.801 Test: blockdev write zeroes read block ...passed 00:15:40.801 Test: blockdev write zeroes read no split ...passed 00:15:40.801 Test: blockdev write zeroes read split ...passed 00:15:40.801 Test: blockdev write zeroes read split partial ...passed 00:15:40.801 Test: blockdev reset ...passed 00:15:40.801 Test: blockdev write read 8 blocks ...passed 00:15:40.801 Test: blockdev write read size > 128k ...passed 00:15:40.801 Test: blockdev write read invalid size ...passed 00:15:40.801 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:40.801 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:40.801 Test: blockdev write read max offset ...passed 00:15:40.801 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:40.801 Test: blockdev writev readv 8 blocks ...passed 00:15:40.801 Test: blockdev writev readv 30 x 1block ...passed 00:15:40.801 Test: blockdev writev readv block ...passed 00:15:40.801 Test: blockdev writev readv size > 128k ...passed 00:15:40.801 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:40.801 Test: blockdev comparev and writev ...passed 00:15:40.801 Test: blockdev nvme passthru rw ...passed 00:15:40.801 Test: blockdev nvme passthru vendor specific ...passed 00:15:40.801 Test: blockdev nvme admin passthru ...passed 00:15:40.801 Test: blockdev copy ...passed 00:15:40.801 Suite: bdevio tests on: nvme0n1 00:15:40.801 Test: blockdev write read block ...passed 00:15:40.801 Test: blockdev write zeroes read block ...passed 00:15:40.801 Test: blockdev write zeroes read no split ...passed 00:15:41.061 Test: blockdev write zeroes read split ...passed 00:15:41.061 Test: blockdev write zeroes read split partial ...passed 00:15:41.061 Test: blockdev reset ...passed 00:15:41.061 Test: blockdev write read 8 blocks ...passed 00:15:41.061 Test: blockdev write read size > 128k ...passed 00:15:41.061 Test: blockdev write read invalid size ...passed 00:15:41.061 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:41.061 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:41.061 Test: blockdev write read max offset ...passed 00:15:41.061 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:41.061 Test: blockdev writev readv 8 blocks ...passed 00:15:41.061 Test: blockdev writev readv 30 x 1block ...passed 00:15:41.061 Test: blockdev writev readv block ...passed 00:15:41.061 Test: blockdev writev readv size > 128k ...passed 00:15:41.061 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:41.061 Test: blockdev comparev and writev ...passed 00:15:41.061 Test: blockdev nvme passthru rw ...passed 00:15:41.061 Test: blockdev nvme passthru vendor specific ...passed 00:15:41.061 Test: blockdev nvme admin passthru ...passed 00:15:41.061 Test: blockdev copy ...passed 
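All six xnvme bdevs ran the identical 23-test bdevio suite, which is where the totals in the run summary directly below come from: 6 suites × 23 tests = 138 tests, with 0 failures on every device.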
00:15:41.061 00:15:41.061 Run Summary: Type Total Ran Passed Failed Inactive 00:15:41.061 suites 6 6 n/a 0 0 00:15:41.061 tests 138 138 138 0 0 00:15:41.061 asserts 780 780 780 0 n/a 00:15:41.061 00:15:41.061 Elapsed time = 1.258 seconds 00:15:41.061 0 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 70919 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 70919 ']' 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 70919 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70919 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:41.061 killing process with pid 70919 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70919' 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 70919 00:15:41.061 18:05:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 70919 00:15:42.448 18:05:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:42.448 00:15:42.448 real 0m2.597s 00:15:42.448 user 0m6.442s 00:15:42.448 sys 0m0.380s 00:15:42.448 18:05:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:42.448 18:05:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:42.448 ************************************ 00:15:42.448 END TEST bdev_bounds 00:15:42.448 ************************************ 00:15:42.448 18:05:11 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:42.448 18:05:11 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:42.448 18:05:11 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:42.448 18:05:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:42.448 ************************************ 00:15:42.448 START TEST bdev_nbd 00:15:42.448 ************************************ 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
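nbd_function_test, whose setup is being traced here, exports each xnvme bdev as a kernel /dev/nbdX device through the dedicated /var/tmp/spdk-nbd.sock RPC socket, verifies I/O on it, then tears everything down. A condensed sketch of the start/verify/stop cycle that plays out in the records below, using the rpc.py path and RPC names exactly as they appear in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    bdev_list=(nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1)
    for bdev in "${bdev_list[@]}"; do
      nbd_device=$($rpc nbd_start_disk "$bdev")  # returns the /dev/nbdX it attached
      waitfornbd "$(basename "$nbd_device")"     # readiness check, sketched further below
    done
    $rpc nbd_get_disks                           # JSON map of nbd devices to bdev names
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5; do
      $rpc nbd_stop_disk "$nbd"
      waitfornbd_exit "$(basename "$nbd")"
    done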
00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70973 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70973 /var/tmp/spdk-nbd.sock 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 70973 ']' 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:42.448 18:05:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:42.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:42.449 18:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:42.449 18:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:42.449 18:05:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:42.449 [2024-11-05 18:05:11.516355] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
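Each nbd attach below is gated on the waitfornbd helper. Its xtrace shows two bounded retry loops: first grep -w against /proc/partitions until the device node registers, then a single-block O_DIRECT dd read whose output size must be non-zero, proving the device actually services I/O. A reconstruction from those records; the loop bounds, grep, dd, stat, and size check all appear verbatim in the trace, while the sleep between retries and the failure return are assumptions (the xtrace only shows the success path):

    waitfornbd() {
      local nbd_name=$1
      local i
      local testfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumption: the retry delay is not visible in the xtrace
      done
      for ((i = 1; i <= 20; i++)); do
        # one 4096-byte O_DIRECT read proves the nbd device is servicing I/O
        if dd if=/dev/"$nbd_name" of="$testfile" bs=4096 count=1 iflag=direct; then
          local size
          size=$(stat -c %s "$testfile")
          rm -f "$testfile"
          [ "$size" != 0 ] && return 0
        fi
        sleep 0.1   # assumption, as above
      done
      return 1
    }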
00:15:42.449 [2024-11-05 18:05:11.516519] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.449 [2024-11-05 18:05:11.699244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.708 [2024-11-05 18:05:11.806605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:43.277 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.278 
1+0 records in 00:15:43.278 1+0 records out 00:15:43.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637148 s, 6.4 MB/s 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.278 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.537 1+0 records in 00:15:43.537 1+0 records out 00:15:43.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00298965 s, 1.4 MB/s 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.537 18:05:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:43.797 18:05:13 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:43.797 1+0 records in 00:15:43.797 1+0 records out 00:15:43.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557706 s, 7.3 MB/s 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:43.797 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.056 1+0 records in 00:15:44.056 1+0 records out 00:15:44.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669203 s, 6.1 MB/s 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.056 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.316 1+0 records in 00:15:44.316 1+0 records out 00:15:44.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829929 s, 4.9 MB/s 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.316 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:15:44.575 18:05:13 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.575 1+0 records in 00:15:44.575 1+0 records out 00:15:44.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805364 s, 5.1 MB/s 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:44.575 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:44.576 18:05:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd0", 00:15:44.835 "bdev_name": "nvme0n1" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd1", 00:15:44.835 "bdev_name": "nvme1n1" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd2", 00:15:44.835 "bdev_name": "nvme2n1" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd3", 00:15:44.835 "bdev_name": "nvme2n2" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd4", 00:15:44.835 "bdev_name": "nvme2n3" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd5", 00:15:44.835 "bdev_name": "nvme3n1" 00:15:44.835 } 00:15:44.835 ]' 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd0", 00:15:44.835 "bdev_name": "nvme0n1" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd1", 00:15:44.835 "bdev_name": "nvme1n1" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd2", 00:15:44.835 "bdev_name": "nvme2n1" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd3", 00:15:44.835 "bdev_name": "nvme2n2" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": "/dev/nbd4", 00:15:44.835 "bdev_name": "nvme2n3" 00:15:44.835 }, 00:15:44.835 { 00:15:44.835 "nbd_device": 
"/dev/nbd5", 00:15:44.835 "bdev_name": "nvme3n1" 00:15:44.835 } 00:15:44.835 ]' 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.835 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.094 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.354 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.613 18:05:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.873 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.132 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:46.392 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:46.652 /dev/nbd0 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.652 1+0 records in 00:15:46.652 1+0 records out 00:15:46.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645575 s, 6.3 MB/s 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:46.652 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:15:46.652 /dev/nbd1 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:46.912 18:05:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:46.912 1+0 records in 00:15:46.912 1+0 records out 00:15:46.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000966147 s, 4.2 MB/s 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:46.912 18:05:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:15:46.912 /dev/nbd10 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:46.912 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.171 1+0 records in 00:15:47.171 1+0 records out 00:15:47.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596357 s, 6.9 MB/s 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:15:47.171 /dev/nbd11 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:47.171 18:05:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:47.171 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.171 1+0 records in 00:15:47.171 1+0 records out 00:15:47.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674684 s, 6.1 MB/s 00:15:47.430 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.430 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:47.430 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.430 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:47.430 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:15:47.431 /dev/nbd12 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.431 1+0 records in 00:15:47.431 1+0 records out 00:15:47.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000720232 s, 5.7 MB/s 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.431 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:47.692 /dev/nbd13 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.692 1+0 records in 00:15:47.692 1+0 records out 00:15:47.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000890762 s, 4.6 MB/s 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:47.692 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.693 18:05:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd0", 00:15:47.952 "bdev_name": "nvme0n1" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd1", 00:15:47.952 "bdev_name": "nvme1n1" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd10", 00:15:47.952 "bdev_name": "nvme2n1" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd11", 00:15:47.952 "bdev_name": "nvme2n2" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd12", 00:15:47.952 "bdev_name": "nvme2n3" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd13", 00:15:47.952 "bdev_name": "nvme3n1" 00:15:47.952 } 00:15:47.952 ]' 00:15:47.952 18:05:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd0", 00:15:47.952 "bdev_name": "nvme0n1" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd1", 00:15:47.952 "bdev_name": "nvme1n1" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd10", 00:15:47.952 "bdev_name": "nvme2n1" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd11", 00:15:47.952 "bdev_name": "nvme2n2" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd12", 00:15:47.952 "bdev_name": "nvme2n3" 00:15:47.952 }, 00:15:47.952 { 00:15:47.952 "nbd_device": "/dev/nbd13", 00:15:47.952 "bdev_name": "nvme3n1" 00:15:47.952 } 00:15:47.952 ]' 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:47.952 /dev/nbd1 00:15:47.952 /dev/nbd10 00:15:47.952 /dev/nbd11 00:15:47.952 /dev/nbd12 00:15:47.952 /dev/nbd13' 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:47.952 /dev/nbd1 00:15:47.952 /dev/nbd10 00:15:47.952 /dev/nbd11 00:15:47.952 /dev/nbd12 00:15:47.952 /dev/nbd13' 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:47.952 256+0 records in 00:15:47.952 256+0 records out 00:15:47.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628273 s, 167 MB/s 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.952 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:48.212 256+0 records in 00:15:48.212 256+0 records out 00:15:48.212 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122236 s, 8.6 MB/s 00:15:48.212 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.212 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:48.212 256+0 records in 00:15:48.212 256+0 records out 00:15:48.212 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.155126 s, 6.8 MB/s 00:15:48.212 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.212 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:48.471 256+0 records in 00:15:48.471 256+0 records out 00:15:48.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124456 s, 8.4 MB/s 00:15:48.471 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.471 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:48.471 256+0 records in 00:15:48.471 256+0 records out 00:15:48.471 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125725 s, 8.3 MB/s 00:15:48.471 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.471 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:48.731 256+0 records in 00:15:48.731 256+0 records out 00:15:48.731 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129486 s, 8.1 MB/s 00:15:48.731 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:48.731 18:05:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:48.990 256+0 records in 00:15:48.990 256+0 records out 00:15:48.990 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124385 s, 8.4 MB/s 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.990 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.250 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.509 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.769 18:05:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.028 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.288 18:05:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:50.288 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:50.547 malloc_lvol_verify 00:15:50.547 18:05:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:50.806 ddf29cb2-cb52-416a-848e-0fb6d6957acb 00:15:50.806 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:51.065 5a841d76-7919-4c9f-ba7c-e12845ca9e85 00:15:51.065 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:51.324 /dev/nbd0 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:15:51.324 mke2fs 1.47.0 (5-Feb-2023) 00:15:51.324 Discarding device blocks: 0/4096 done 00:15:51.324 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:51.324 00:15:51.324 Allocating group tables: 0/1 done 00:15:51.324 Writing inode tables: 0/1 done 00:15:51.324 Creating journal (1024 blocks): done 00:15:51.324 Writing superblocks and filesystem accounting information: 0/1 done 00:15:51.324 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.324 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70973 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 70973 ']' 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 70973 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70973 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:51.583 killing process with pid 70973 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70973' 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 70973 00:15:51.583 18:05:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 70973 00:15:52.963 ************************************ 00:15:52.963 END TEST bdev_nbd 00:15:52.963 ************************************ 00:15:52.963 18:05:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:52.963 00:15:52.963 real 0m10.452s 00:15:52.963 user 0m13.330s 00:15:52.963 sys 0m4.535s 00:15:52.963 18:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:52.963 
18:05:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:52.963 18:05:21 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:52.963 18:05:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:52.963 18:05:21 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:52.963 18:05:21 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:52.963 18:05:21 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:52.963 18:05:21 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:52.963 18:05:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.963 ************************************ 00:15:52.963 START TEST bdev_fio 00:15:52.963 ************************************ 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:52.963 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:15:52.963 18:05:21 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo 
serialize_overlap=1 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:52.963 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:52.964 ************************************ 00:15:52.964 START TEST bdev_fio_rw_verify 00:15:52.964 ************************************ 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:52.964 18:05:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:52.964 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.964 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.964 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.964 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.964 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.964 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:52.964 fio-3.35 00:15:52.964 Starting 6 threads 00:16:05.177 00:16:05.177 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71380: Tue Nov 5 18:05:33 2024 00:16:05.177 read: IOPS=32.6k, BW=127MiB/s (134MB/s)(1273MiB/10001msec) 00:16:05.177 slat (usec): min=2, max=1043, avg= 6.27, stdev= 3.17 00:16:05.177 clat (usec): min=92, max=2820, avg=612.83, 
stdev=134.42 00:16:05.177 lat (usec): min=97, max=2829, avg=619.09, stdev=135.00 00:16:05.177 clat percentiles (usec): 00:16:05.177 | 50.000th=[ 635], 99.000th=[ 914], 99.900th=[ 1319], 99.990th=[ 2008], 00:16:05.177 | 99.999th=[ 2802] 00:16:05.177 write: IOPS=32.8k, BW=128MiB/s (134MB/s)(1280MiB/10001msec); 0 zone resets 00:16:05.177 slat (usec): min=11, max=869, avg=17.50, stdev=11.52 00:16:05.177 clat (usec): min=78, max=3205, avg=668.27, stdev=131.11 00:16:05.177 lat (usec): min=93, max=3220, avg=685.77, stdev=131.47 00:16:05.177 clat percentiles (usec): 00:16:05.177 | 50.000th=[ 676], 99.000th=[ 1012], 99.900th=[ 1663], 99.990th=[ 2311], 00:16:05.177 | 99.999th=[ 2999] 00:16:05.177 bw ( KiB/s): min=108659, max=147649, per=100.00%, avg=131777.05, stdev=2076.82, samples=114 00:16:05.177 iops : min=27164, max=36912, avg=32944.00, stdev=519.20, samples=114 00:16:05.177 lat (usec) : 100=0.01%, 250=2.31%, 500=7.28%, 750=79.17%, 1000=10.47% 00:16:05.177 lat (msec) : 2=0.74%, 4=0.03% 00:16:05.177 cpu : usr=65.24%, sys=26.32%, ctx=7238, majf=0, minf=27045 00:16:05.177 IO depths : 1=12.3%, 2=24.8%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.177 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.177 issued rwts: total=325965,327587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.177 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:05.177 00:16:05.177 Run status group 0 (all jobs): 00:16:05.177 READ: bw=127MiB/s (134MB/s), 127MiB/s-127MiB/s (134MB/s-134MB/s), io=1273MiB (1335MB), run=10001-10001msec 00:16:05.177 WRITE: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=1280MiB (1342MB), run=10001-10001msec 00:16:05.177 ----------------------------------------------------- 00:16:05.177 Suppressions used: 00:16:05.177 count bytes template 00:16:05.177 6 48 /usr/src/fio/parse.c 00:16:05.177 1387 133152 /usr/src/fio/iolog.c 00:16:05.177 1 8 libtcmalloc_minimal.so 00:16:05.177 1 904 libcrypto.so 00:16:05.177 ----------------------------------------------------- 00:16:05.177 00:16:05.177 00:16:05.177 real 0m12.407s 00:16:05.177 user 0m41.044s 00:16:05.177 sys 0m16.217s 00:16:05.177 18:05:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:05.177 18:05:34 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:05.177 ************************************ 00:16:05.177 END TEST bdev_fio_rw_verify 00:16:05.177 ************************************ 00:16:05.177 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 
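The rw_verify stage above drives fio through SPDK's spdk_bdev ioengine, which is loaded as an LD_PRELOAD plugin; because this build links the plugin against ASan, the traced helper (common/autotest_common.sh@1339-1354) first resolves the matching libasan with ldd and preloads it ahead of the plugin. A condensed sketch of that invocation, reconstructed from the trace — the helper's loop over both sanitizer runtimes ('libasan', 'libclang_rt.asan') is collapsed here to the libasan branch that actually fired in this run:

#!/usr/bin/env bash
# Sketch of the fio_bdev launch traced above; all paths come from the log.
spdk=/home/vagrant/spdk_repo/spdk
plugin=$spdk/build/fio/spdk_bdev

# fio must see the sanitizer runtime before the plugin, so resolve the
# libasan the plugin links against and preload both, in that order.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    "$spdk/test/bdev/bdev.fio" --verify_state_save=0 \
    --spdk_json_conf="$spdk/test/bdev/bdev.json" \
    --spdk_mem=0 --aux-path="$spdk/../output"

Each [job_*] section echoed into bdev.fio maps one SPDK bdev to a fio job (filename=nvme0n1 and so on), which is why the run above starts six threads against the six xnvme bdevs.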
00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d8e0af6b-b4c9-4f98-b4e1-2d609735d133"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d8e0af6b-b4c9-4f98-b4e1-2d609735d133",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "63fc7a0a-0377-4c70-a93b-d0f9e69b5d9a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "63fc7a0a-0377-4c70-a93b-d0f9e69b5d9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2df15308-fb39-4523-bc2e-665b9b17eb67"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2df15308-fb39-4523-bc2e-665b9b17eb67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": 
false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "c26a81f7-94c1-4441-a6e2-c32923990a1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c26a81f7-94c1-4441-a6e2-c32923990a1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "63300d51-0993-4007-a935-6105b116bfbf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63300d51-0993-4007-a935-6105b116bfbf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "1125490f-f076-4f8d-8af1-4ca0432400c4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1125490f-f076-4f8d-8af1-4ca0432400c4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:05.478 /home/vagrant/spdk_repo/spdk 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:05.478 00:16:05.478 
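Note why no trim I/O actually ran: the blockdev.sh@354 trace pipes the bdev dump through jq to select unmap-capable devices, every xNVMe bdev above reports "unmap": false, so the filter yields an empty string and the [[ -n '' ]] guard falls straight through to cleanup. A sketch of the same selection against a live target (the jq filter is verbatim from the trace; bdev_get_bdevs returns an array, hence the extra .[] not needed for the printf'd object stream above):

# Print the names of bdevs that can service unmap/trim; empty output
# means the trim fio pass has nothing to drive and is skipped.
scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'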
real 0m12.623s 00:16:05.478 user 0m41.154s 00:16:05.478 sys 0m16.328s 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:05.478 18:05:34 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:05.478 ************************************ 00:16:05.478 END TEST bdev_fio 00:16:05.478 ************************************ 00:16:05.478 18:05:34 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:05.478 18:05:34 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:05.478 18:05:34 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:05.478 18:05:34 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:05.478 18:05:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:05.478 ************************************ 00:16:05.478 START TEST bdev_verify 00:16:05.478 ************************************ 00:16:05.478 18:05:34 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:05.478 [2024-11-05 18:05:34.734767] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:05.478 [2024-11-05 18:05:34.734896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71557 ] 00:16:05.737 [2024-11-05 18:05:34.915620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:05.737 [2024-11-05 18:05:35.021363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.737 [2024-11-05 18:05:35.021397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.306 Running I/O for 5 seconds... 
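For reference while the run proceeds, the invocation under test is, verbatim from the run_test line above:

# -q 128: per-job queue depth; -o 4096: I/O size in bytes (4 KiB);
# -w verify: write-and-read-back verification; -t 5: seconds to run;
# -m 0x3 plus -C: judging by the paired core-mask 0x1/0x2 rows in the
# results, this fans a job for every bdev out to both reactor cores.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3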
00:16:08.622 24222.00 IOPS, 94.62 MiB/s [2024-11-05T18:05:38.883Z] 26355.50 IOPS, 102.95 MiB/s [2024-11-05T18:05:39.821Z] 24726.67 IOPS, 96.59 MiB/s [2024-11-05T18:05:40.759Z] 25639.75 IOPS, 100.16 MiB/s [2024-11-05T18:05:40.759Z] 25722.40 IOPS, 100.48 MiB/s 00:16:11.436 Latency(us) 00:16:11.436 [2024-11-05T18:05:40.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.436 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x0 length 0xa0000 00:16:11.436 nvme0n1 : 5.01 2017.51 7.88 0.00 0.00 63339.92 10422.59 82538.51 00:16:11.436 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0xa0000 length 0xa0000 00:16:11.436 nvme0n1 : 5.03 2110.86 8.25 0.00 0.00 60537.08 8738.13 73695.10 00:16:11.436 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x0 length 0xbd0bd 00:16:11.436 nvme1n1 : 5.05 2588.61 10.11 0.00 0.00 49168.59 3921.63 193712.84 00:16:11.436 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:11.436 nvme1n1 : 5.06 2347.29 9.17 0.00 0.00 54282.25 3553.16 211399.66 00:16:11.436 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x0 length 0x80000 00:16:11.436 nvme2n1 : 5.05 2002.27 7.82 0.00 0.00 63655.78 4579.62 81275.17 00:16:11.436 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x80000 length 0x80000 00:16:11.436 nvme2n1 : 5.06 2072.31 8.09 0.00 0.00 61256.19 6211.44 74116.22 00:16:11.436 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x0 length 0x80000 00:16:11.436 nvme2n2 : 5.05 2027.13 7.92 0.00 0.00 62703.62 5079.70 72852.87 00:16:11.436 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x80000 length 0x80000 00:16:11.436 nvme2n2 : 5.07 2071.83 8.09 0.00 0.00 61134.23 7895.90 69905.07 00:16:11.436 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x0 length 0x80000 00:16:11.436 nvme2n3 : 5.06 2024.32 7.91 0.00 0.00 62668.32 4395.39 73273.99 00:16:11.436 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x80000 length 0x80000 00:16:11.436 nvme2n3 : 5.07 2071.33 8.09 0.00 0.00 61074.25 8317.02 72431.76 00:16:11.436 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x0 length 0x20000 00:16:11.436 nvme3n1 : 5.05 2028.11 7.92 0.00 0.00 62443.74 4237.47 76221.79 00:16:11.436 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:11.436 Verification LBA range: start 0x20000 length 0x20000 00:16:11.436 nvme3n1 : 5.07 2094.55 8.18 0.00 0.00 60360.91 3579.48 72431.76 00:16:11.436 [2024-11-05T18:05:40.759Z] =================================================================================================================== 00:16:11.436 [2024-11-05T18:05:40.759Z] Total : 25456.11 99.44 0.00 0.00 59901.40 3553.16 211399.66 00:16:12.816 00:16:12.816 real 0m7.073s 00:16:12.816 user 0m10.602s 00:16:12.816 sys 0m2.193s 00:16:12.816 18:05:41 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:16:12.816 18:05:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:12.816 ************************************ 00:16:12.816 END TEST bdev_verify 00:16:12.816 ************************************ 00:16:12.816 18:05:41 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:12.816 18:05:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:16:12.816 18:05:41 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:12.816 18:05:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:12.816 ************************************ 00:16:12.816 START TEST bdev_verify_big_io 00:16:12.816 ************************************ 00:16:12.816 18:05:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:12.816 [2024-11-05 18:05:41.887458] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:12.816 [2024-11-05 18:05:41.887603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71657 ] 00:16:12.816 [2024-11-05 18:05:42.069887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.075 [2024-11-05 18:05:42.178441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.075 [2024-11-05 18:05:42.178492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.644 Running I/O for 5 seconds... 
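A quick cross-check for these bdevperf tables: MiB/s is just IOPS times the I/O size. For nvme0n1 on core 0x1 in the verify run above (2017.51 IOPS at 4 KiB):

echo 'scale=2; 2017.51 * 4096 / 1048576' | bc   # -> 7.88, matching the table

The big-I/O pass now starting uses -o 65536, so the same devices will post far fewer IOPS (hundreds rather than thousands) at comparable byte throughput.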
00:16:19.471 2264.00 IOPS, 141.50 MiB/s [2024-11-05T18:05:48.794Z] 3412.00 IOPS, 213.25 MiB/s [2024-11-05T18:05:48.794Z] 3694.00 IOPS, 230.88 MiB/s 00:16:19.471 Latency(us) 00:16:19.471 [2024-11-05T18:05:48.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.471 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x0 length 0xa000 00:16:19.471 nvme0n1 : 5.70 154.25 9.64 0.00 0.00 792988.23 36215.88 1111743.23 00:16:19.471 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0xa000 length 0xa000 00:16:19.471 nvme0n1 : 5.72 145.36 9.09 0.00 0.00 852962.07 114543.24 929821.61 00:16:19.471 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x0 length 0xbd0b 00:16:19.471 nvme1n1 : 5.76 175.13 10.95 0.00 0.00 698492.94 47164.86 798433.77 00:16:19.471 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:19.471 nvme1n1 : 5.73 178.60 11.16 0.00 0.00 689218.60 9369.81 990462.15 00:16:19.471 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x0 length 0x8000 00:16:19.471 nvme2n1 : 5.76 141.73 8.86 0.00 0.00 843596.66 44848.73 1185859.44 00:16:19.471 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x8000 length 0x8000 00:16:19.471 nvme2n1 : 5.74 150.57 9.41 0.00 0.00 786219.74 90118.58 1327354.04 00:16:19.471 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x0 length 0x8000 00:16:19.471 nvme2n2 : 5.75 162.82 10.18 0.00 0.00 710301.99 35584.21 734424.31 00:16:19.471 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x8000 length 0x8000 00:16:19.471 nvme2n2 : 5.74 150.54 9.41 0.00 0.00 779470.75 9685.64 1913545.92 00:16:19.471 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x0 length 0x8000 00:16:19.471 nvme2n3 : 5.76 174.97 10.94 0.00 0.00 651339.99 2395.09 1037627.01 00:16:19.471 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x8000 length 0x8000 00:16:19.471 nvme2n3 : 5.74 153.42 9.59 0.00 0.00 749318.43 5448.17 1711410.79 00:16:19.471 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x0 length 0x2000 00:16:19.471 nvme3n1 : 5.77 183.17 11.45 0.00 0.00 609218.61 9843.56 896132.42 00:16:19.471 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:19.471 Verification LBA range: start 0x2000 length 0x2000 00:16:19.471 nvme3n1 : 5.74 220.20 13.76 0.00 0.00 509871.94 9264.53 791695.94 00:16:19.471 [2024-11-05T18:05:48.794Z] =================================================================================================================== 00:16:19.471 [2024-11-05T18:05:48.794Z] Total : 1990.78 124.42 0.00 0.00 710905.60 2395.09 1913545.92 00:16:20.849 00:16:20.849 real 0m8.114s 00:16:20.849 user 0m14.635s 00:16:20.849 sys 0m0.626s 00:16:20.849 18:05:49 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:20.849 18:05:49 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.849 ************************************ 00:16:20.849 END TEST bdev_verify_big_io 00:16:20.849 ************************************ 00:16:20.849 18:05:49 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.849 18:05:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:20.849 18:05:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:20.849 18:05:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.849 ************************************ 00:16:20.849 START TEST bdev_write_zeroes 00:16:20.849 ************************************ 00:16:20.849 18:05:49 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:20.849 [2024-11-05 18:05:50.081488] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:20.849 [2024-11-05 18:05:50.081615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71767 ] 00:16:21.108 [2024-11-05 18:05:50.261803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.108 [2024-11-05 18:05:50.369363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.676 Running I/O for 1 seconds... 
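Same harness, different workload: this pass issues write_zeroes commands instead of data writes, on the default single core (no -C/-m, so only core-mask 0x1 jobs appear in the results) for one second. Verbatim from the run_test line above:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w write_zeroes -t 1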
00:16:22.614 48384.00 IOPS, 189.00 MiB/s 00:16:22.614 Latency(us) 00:16:22.614 [2024-11-05T18:05:51.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.614 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.614 nvme0n1 : 1.05 7335.45 28.65 0.00 0.00 17434.97 9475.08 33899.75 00:16:22.614 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.614 nvme1n1 : 1.05 10991.37 42.94 0.00 0.00 11626.06 4158.51 38742.57 00:16:22.614 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.614 nvme2n1 : 1.03 7352.45 28.72 0.00 0.00 17306.61 7632.71 34320.86 00:16:22.614 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.614 nvme2n2 : 1.05 7320.73 28.60 0.00 0.00 17334.41 5474.49 37479.22 00:16:22.614 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.614 nvme2n3 : 1.05 7311.18 28.56 0.00 0.00 17344.34 5448.17 40005.91 00:16:22.614 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:22.614 nvme3n1 : 1.05 7301.68 28.52 0.00 0.00 17352.27 5448.17 41690.37 00:16:22.614 [2024-11-05T18:05:51.937Z] =================================================================================================================== 00:16:22.614 [2024-11-05T18:05:51.937Z] Total : 47612.85 185.99 0.00 0.00 16032.13 4158.51 41690.37 00:16:23.995 00:16:23.995 real 0m2.990s 00:16:23.995 user 0m2.206s 00:16:23.995 sys 0m0.595s 00:16:23.995 18:05:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:23.995 18:05:52 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:23.995 ************************************ 00:16:23.995 END TEST bdev_write_zeroes 00:16:23.995 ************************************ 00:16:23.995 18:05:53 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.995 18:05:53 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:23.995 18:05:53 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:23.995 18:05:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.995 ************************************ 00:16:23.995 START TEST bdev_json_nonenclosed 00:16:23.995 ************************************ 00:16:23.995 18:05:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.995 [2024-11-05 18:05:53.133519] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:16:23.995 [2024-11-05 18:05:53.133661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71826 ] 00:16:23.995 [2024-11-05 18:05:53.316375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.254 [2024-11-05 18:05:53.423019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.254 [2024-11-05 18:05:53.423116] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:24.254 [2024-11-05 18:05:53.423138] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:24.254 [2024-11-05 18:05:53.423150] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:24.513 00:16:24.513 real 0m0.618s 00:16:24.513 user 0m0.385s 00:16:24.513 sys 0m0.129s 00:16:24.513 18:05:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:24.513 18:05:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:24.513 ************************************ 00:16:24.513 END TEST bdev_json_nonenclosed 00:16:24.513 ************************************ 00:16:24.513 18:05:53 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:24.513 18:05:53 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:16:24.513 18:05:53 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:24.513 18:05:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:24.513 ************************************ 00:16:24.513 START TEST bdev_json_nonarray 00:16:24.513 ************************************ 00:16:24.513 18:05:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:24.513 [2024-11-05 18:05:53.834379] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:24.513 [2024-11-05 18:05:53.834642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71851 ] 00:16:24.772 [2024-11-05 18:05:54.015361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.032 [2024-11-05 18:05:54.121319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.032 [2024-11-05 18:05:54.121444] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
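Both bdev_json_* cases are negative tests: bdevperf is fed a deliberately malformed config and is expected to exit non-zero with exactly the json_config errors printed here, which run_test then counts as a pass. A sketch of the shape being rejected (illustrative contents; the real nonenclosed.json and nonarray.json are not reproduced in this log):

# "subsystems" must be a JSON array of subsystem objects; a bare object
# trips json_config_prepare_ctx and the app stops with a non-zero code.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "not": "an array" } }
EOF
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
     --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1; then
  echo 'unexpected success'; exit 1
fi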
00:16:25.032 [2024-11-05 18:05:54.121469] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:25.032 [2024-11-05 18:05:54.121481] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:25.291 00:16:25.291 real 0m0.621s 00:16:25.291 user 0m0.389s 00:16:25.291 sys 0m0.128s 00:16:25.291 18:05:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:25.291 18:05:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:25.291 ************************************ 00:16:25.291 END TEST bdev_json_nonarray 00:16:25.291 ************************************ 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:25.291 18:05:54 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:25.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:27.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.239 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.239 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:27.498 00:16:27.498 real 1m0.198s 00:16:27.498 user 1m42.049s 00:16:27.498 sys 0m29.594s 00:16:27.498 18:05:56 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:27.498 18:05:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.498 ************************************ 00:16:27.498 END TEST blockdev_xnvme 00:16:27.498 ************************************ 00:16:27.498 18:05:56 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:27.498 18:05:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:27.498 18:05:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:27.498 18:05:56 -- common/autotest_common.sh@10 -- # set +x 00:16:27.498 ************************************ 00:16:27.498 START TEST ublk 00:16:27.498 ************************************ 00:16:27.498 18:05:56 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:27.758 * Looking for test storage... 
00:16:27.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.758 18:05:56 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.758 18:05:56 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.758 18:05:56 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.758 18:05:56 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.758 18:05:56 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.758 18:05:56 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.758 18:05:56 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.758 18:05:56 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:27.758 18:05:56 ublk -- scripts/common.sh@345 -- # : 1 00:16:27.758 18:05:56 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.758 18:05:56 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:27.758 18:05:56 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:27.758 18:05:56 ublk -- scripts/common.sh@353 -- # local d=1 00:16:27.758 18:05:56 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.758 18:05:56 ublk -- scripts/common.sh@355 -- # echo 1 00:16:27.758 18:05:56 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.758 18:05:56 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@353 -- # local d=2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.758 18:05:56 ublk -- scripts/common.sh@355 -- # echo 2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.758 18:05:56 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.758 18:05:56 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.758 18:05:56 ublk -- scripts/common.sh@368 -- # return 0 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.758 --rc genhtml_branch_coverage=1 00:16:27.758 --rc genhtml_function_coverage=1 00:16:27.758 --rc genhtml_legend=1 00:16:27.758 --rc geninfo_all_blocks=1 00:16:27.758 --rc geninfo_unexecuted_blocks=1 00:16:27.758 00:16:27.758 ' 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.758 --rc genhtml_branch_coverage=1 00:16:27.758 --rc genhtml_function_coverage=1 00:16:27.758 --rc genhtml_legend=1 00:16:27.758 --rc geninfo_all_blocks=1 00:16:27.758 --rc geninfo_unexecuted_blocks=1 00:16:27.758 00:16:27.758 ' 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.758 --rc genhtml_branch_coverage=1 00:16:27.758 --rc 
genhtml_function_coverage=1 00:16:27.758 --rc genhtml_legend=1 00:16:27.758 --rc geninfo_all_blocks=1 00:16:27.758 --rc geninfo_unexecuted_blocks=1 00:16:27.758 00:16:27.758 ' 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:27.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.758 --rc genhtml_branch_coverage=1 00:16:27.758 --rc genhtml_function_coverage=1 00:16:27.758 --rc genhtml_legend=1 00:16:27.758 --rc geninfo_all_blocks=1 00:16:27.758 --rc geninfo_unexecuted_blocks=1 00:16:27.758 00:16:27.758 ' 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:27.758 18:05:56 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:27.758 18:05:56 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:27.758 18:05:56 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:27.758 18:05:56 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:27.758 18:05:56 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:27.758 18:05:56 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:27.758 18:05:56 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:27.758 18:05:56 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:27.758 18:05:56 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:27.758 18:05:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.758 ************************************ 00:16:27.758 START TEST test_save_ublk_config 00:16:27.758 ************************************ 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72141 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72141 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72141 ']' 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:27.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
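The shape of test_save_config, visible in the traces that follow: bring up a target, create a ublk disk backed by a malloc bdev, capture the live state with save_config, and later (the ublk.sh@118/@119 lines below) boot a second target directly from that JSON via -c /dev/fd/63. A minimal sketch of the round-trip, assuming the rpc_cmd helper wraps scripts/rpc.py as usual in this suite:

# Capture the running target's configuration as a JSON document ...
config=$(scripts/rpc.py save_config)
# ... then start a fresh target from it; bash's <(...) process substitution
# is what shows up as /dev/fd/63 in the trace below.
build/bin/spdk_tgt -L ublk -c <(echo "$config")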
00:16:27.758 18:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:27.758 18:05:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:27.758 [2024-11-05 18:05:57.081248] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:27.758 [2024-11-05 18:05:57.081388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72141 ] 00:16:28.018 [2024-11-05 18:05:57.262880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.277 [2024-11-05 18:05:57.367506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:29.214 [2024-11-05 18:05:58.230440] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:29.214 [2024-11-05 18:05:58.231634] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:29.214 malloc0 00:16:29.214 [2024-11-05 18:05:58.310558] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:29.214 [2024-11-05 18:05:58.310646] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:29.214 [2024-11-05 18:05:58.310660] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:29.214 [2024-11-05 18:05:58.310669] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:29.214 [2024-11-05 18:05:58.319540] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:29.214 [2024-11-05 18:05:58.319566] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:29.214 [2024-11-05 18:05:58.326444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:29.214 [2024-11-05 18:05:58.326545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:29.214 [2024-11-05 18:05:58.343453] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:29.214 0 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.214 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:29.474 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.474 18:05:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:29.474 
"subsystems": [ 00:16:29.474 { 00:16:29.474 "subsystem": "fsdev", 00:16:29.474 "config": [ 00:16:29.474 { 00:16:29.474 "method": "fsdev_set_opts", 00:16:29.474 "params": { 00:16:29.474 "fsdev_io_pool_size": 65535, 00:16:29.474 "fsdev_io_cache_size": 256 00:16:29.474 } 00:16:29.474 } 00:16:29.474 ] 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "subsystem": "keyring", 00:16:29.474 "config": [] 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "subsystem": "iobuf", 00:16:29.474 "config": [ 00:16:29.474 { 00:16:29.474 "method": "iobuf_set_options", 00:16:29.474 "params": { 00:16:29.474 "small_pool_count": 8192, 00:16:29.474 "large_pool_count": 1024, 00:16:29.474 "small_bufsize": 8192, 00:16:29.474 "large_bufsize": 135168, 00:16:29.474 "enable_numa": false 00:16:29.474 } 00:16:29.474 } 00:16:29.474 ] 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "subsystem": "sock", 00:16:29.474 "config": [ 00:16:29.474 { 00:16:29.474 "method": "sock_set_default_impl", 00:16:29.474 "params": { 00:16:29.474 "impl_name": "posix" 00:16:29.474 } 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "method": "sock_impl_set_options", 00:16:29.474 "params": { 00:16:29.474 "impl_name": "ssl", 00:16:29.474 "recv_buf_size": 4096, 00:16:29.474 "send_buf_size": 4096, 00:16:29.474 "enable_recv_pipe": true, 00:16:29.474 "enable_quickack": false, 00:16:29.474 "enable_placement_id": 0, 00:16:29.474 "enable_zerocopy_send_server": true, 00:16:29.474 "enable_zerocopy_send_client": false, 00:16:29.474 "zerocopy_threshold": 0, 00:16:29.474 "tls_version": 0, 00:16:29.474 "enable_ktls": false 00:16:29.474 } 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "method": "sock_impl_set_options", 00:16:29.474 "params": { 00:16:29.474 "impl_name": "posix", 00:16:29.474 "recv_buf_size": 2097152, 00:16:29.474 "send_buf_size": 2097152, 00:16:29.474 "enable_recv_pipe": true, 00:16:29.474 "enable_quickack": false, 00:16:29.474 "enable_placement_id": 0, 00:16:29.474 "enable_zerocopy_send_server": true, 00:16:29.474 "enable_zerocopy_send_client": false, 00:16:29.474 "zerocopy_threshold": 0, 00:16:29.474 "tls_version": 0, 00:16:29.474 "enable_ktls": false 00:16:29.474 } 00:16:29.474 } 00:16:29.474 ] 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "subsystem": "vmd", 00:16:29.474 "config": [] 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "subsystem": "accel", 00:16:29.474 "config": [ 00:16:29.474 { 00:16:29.474 "method": "accel_set_options", 00:16:29.474 "params": { 00:16:29.474 "small_cache_size": 128, 00:16:29.474 "large_cache_size": 16, 00:16:29.474 "task_count": 2048, 00:16:29.474 "sequence_count": 2048, 00:16:29.474 "buf_count": 2048 00:16:29.474 } 00:16:29.474 } 00:16:29.474 ] 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "subsystem": "bdev", 00:16:29.474 "config": [ 00:16:29.474 { 00:16:29.474 "method": "bdev_set_options", 00:16:29.474 "params": { 00:16:29.474 "bdev_io_pool_size": 65535, 00:16:29.474 "bdev_io_cache_size": 256, 00:16:29.474 "bdev_auto_examine": true, 00:16:29.474 "iobuf_small_cache_size": 128, 00:16:29.474 "iobuf_large_cache_size": 16 00:16:29.474 } 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "method": "bdev_raid_set_options", 00:16:29.474 "params": { 00:16:29.474 "process_window_size_kb": 1024, 00:16:29.474 "process_max_bandwidth_mb_sec": 0 00:16:29.474 } 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "method": "bdev_iscsi_set_options", 00:16:29.474 "params": { 00:16:29.474 "timeout_sec": 30 00:16:29.474 } 00:16:29.474 }, 00:16:29.474 { 00:16:29.474 "method": "bdev_nvme_set_options", 00:16:29.474 "params": { 00:16:29.474 "action_on_timeout": "none", 
00:16:29.474 "timeout_us": 0, 00:16:29.474 "timeout_admin_us": 0, 00:16:29.474 "keep_alive_timeout_ms": 10000, 00:16:29.474 "arbitration_burst": 0, 00:16:29.474 "low_priority_weight": 0, 00:16:29.474 "medium_priority_weight": 0, 00:16:29.474 "high_priority_weight": 0, 00:16:29.475 "nvme_adminq_poll_period_us": 10000, 00:16:29.475 "nvme_ioq_poll_period_us": 0, 00:16:29.475 "io_queue_requests": 0, 00:16:29.475 "delay_cmd_submit": true, 00:16:29.475 "transport_retry_count": 4, 00:16:29.475 "bdev_retry_count": 3, 00:16:29.475 "transport_ack_timeout": 0, 00:16:29.475 "ctrlr_loss_timeout_sec": 0, 00:16:29.475 "reconnect_delay_sec": 0, 00:16:29.475 "fast_io_fail_timeout_sec": 0, 00:16:29.475 "disable_auto_failback": false, 00:16:29.475 "generate_uuids": false, 00:16:29.475 "transport_tos": 0, 00:16:29.475 "nvme_error_stat": false, 00:16:29.475 "rdma_srq_size": 0, 00:16:29.475 "io_path_stat": false, 00:16:29.475 "allow_accel_sequence": false, 00:16:29.475 "rdma_max_cq_size": 0, 00:16:29.475 "rdma_cm_event_timeout_ms": 0, 00:16:29.475 "dhchap_digests": [ 00:16:29.475 "sha256", 00:16:29.475 "sha384", 00:16:29.475 "sha512" 00:16:29.475 ], 00:16:29.475 "dhchap_dhgroups": [ 00:16:29.475 "null", 00:16:29.475 "ffdhe2048", 00:16:29.475 "ffdhe3072", 00:16:29.475 "ffdhe4096", 00:16:29.475 "ffdhe6144", 00:16:29.475 "ffdhe8192" 00:16:29.475 ] 00:16:29.475 } 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "method": "bdev_nvme_set_hotplug", 00:16:29.475 "params": { 00:16:29.475 "period_us": 100000, 00:16:29.475 "enable": false 00:16:29.475 } 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "method": "bdev_malloc_create", 00:16:29.475 "params": { 00:16:29.475 "name": "malloc0", 00:16:29.475 "num_blocks": 8192, 00:16:29.475 "block_size": 4096, 00:16:29.475 "physical_block_size": 4096, 00:16:29.475 "uuid": "300b7830-58c3-4104-a5b8-212a061fbed7", 00:16:29.475 "optimal_io_boundary": 0, 00:16:29.475 "md_size": 0, 00:16:29.475 "dif_type": 0, 00:16:29.475 "dif_is_head_of_md": false, 00:16:29.475 "dif_pi_format": 0 00:16:29.475 } 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "method": "bdev_wait_for_examine" 00:16:29.475 } 00:16:29.475 ] 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "scsi", 00:16:29.475 "config": null 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "scheduler", 00:16:29.475 "config": [ 00:16:29.475 { 00:16:29.475 "method": "framework_set_scheduler", 00:16:29.475 "params": { 00:16:29.475 "name": "static" 00:16:29.475 } 00:16:29.475 } 00:16:29.475 ] 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "vhost_scsi", 00:16:29.475 "config": [] 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "vhost_blk", 00:16:29.475 "config": [] 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "ublk", 00:16:29.475 "config": [ 00:16:29.475 { 00:16:29.475 "method": "ublk_create_target", 00:16:29.475 "params": { 00:16:29.475 "cpumask": "1" 00:16:29.475 } 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "method": "ublk_start_disk", 00:16:29.475 "params": { 00:16:29.475 "bdev_name": "malloc0", 00:16:29.475 "ublk_id": 0, 00:16:29.475 "num_queues": 1, 00:16:29.475 "queue_depth": 128 00:16:29.475 } 00:16:29.475 } 00:16:29.475 ] 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "nbd", 00:16:29.475 "config": [] 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "nvmf", 00:16:29.475 "config": [ 00:16:29.475 { 00:16:29.475 "method": "nvmf_set_config", 00:16:29.475 "params": { 00:16:29.475 "discovery_filter": "match_any", 00:16:29.475 "admin_cmd_passthru": { 00:16:29.475 "identify_ctrlr": false 
00:16:29.475 }, 00:16:29.475 "dhchap_digests": [ 00:16:29.475 "sha256", 00:16:29.475 "sha384", 00:16:29.475 "sha512" 00:16:29.475 ], 00:16:29.475 "dhchap_dhgroups": [ 00:16:29.475 "null", 00:16:29.475 "ffdhe2048", 00:16:29.475 "ffdhe3072", 00:16:29.475 "ffdhe4096", 00:16:29.475 "ffdhe6144", 00:16:29.475 "ffdhe8192" 00:16:29.475 ] 00:16:29.475 } 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "method": "nvmf_set_max_subsystems", 00:16:29.475 "params": { 00:16:29.475 "max_subsystems": 1024 00:16:29.475 } 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "method": "nvmf_set_crdt", 00:16:29.475 "params": { 00:16:29.475 "crdt1": 0, 00:16:29.475 "crdt2": 0, 00:16:29.475 "crdt3": 0 00:16:29.475 } 00:16:29.475 } 00:16:29.475 ] 00:16:29.475 }, 00:16:29.475 { 00:16:29.475 "subsystem": "iscsi", 00:16:29.475 "config": [ 00:16:29.475 { 00:16:29.475 "method": "iscsi_set_options", 00:16:29.475 "params": { 00:16:29.475 "node_base": "iqn.2016-06.io.spdk", 00:16:29.475 "max_sessions": 128, 00:16:29.475 "max_connections_per_session": 2, 00:16:29.475 "max_queue_depth": 64, 00:16:29.475 "default_time2wait": 2, 00:16:29.475 "default_time2retain": 20, 00:16:29.475 "first_burst_length": 8192, 00:16:29.475 "immediate_data": true, 00:16:29.475 "allow_duplicated_isid": false, 00:16:29.475 "error_recovery_level": 0, 00:16:29.475 "nop_timeout": 60, 00:16:29.475 "nop_in_interval": 30, 00:16:29.475 "disable_chap": false, 00:16:29.475 "require_chap": false, 00:16:29.475 "mutual_chap": false, 00:16:29.475 "chap_group": 0, 00:16:29.475 "max_large_datain_per_connection": 64, 00:16:29.475 "max_r2t_per_connection": 4, 00:16:29.475 "pdu_pool_size": 36864, 00:16:29.475 "immediate_data_pool_size": 16384, 00:16:29.475 "data_out_pool_size": 2048 00:16:29.475 } 00:16:29.475 } 00:16:29.475 ] 00:16:29.475 } 00:16:29.475 ] 00:16:29.475 }' 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72141 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72141 ']' 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72141 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72141 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:29.475 killing process with pid 72141 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72141' 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72141 00:16:29.475 18:05:58 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72141 00:16:30.853 [2024-11-05 18:06:00.080240] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:30.853 [2024-11-05 18:06:00.119459] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:30.853 [2024-11-05 18:06:00.119604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:30.853 [2024-11-05 18:06:00.128432] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:30.853 [2024-11-05 
18:06:00.128494] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:30.853 [2024-11-05 18:06:00.128510] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:30.853 [2024-11-05 18:06:00.128534] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:30.853 [2024-11-05 18:06:00.128673] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72207 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72207 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72207 ']' 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:32.760 "subsystems": [ 00:16:32.760 { 00:16:32.760 "subsystem": "fsdev", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "fsdev_set_opts", 00:16:32.760 "params": { 00:16:32.760 "fsdev_io_pool_size": 65535, 00:16:32.760 "fsdev_io_cache_size": 256 00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "keyring", 00:16:32.760 "config": [] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "iobuf", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "iobuf_set_options", 00:16:32.760 "params": { 00:16:32.760 "small_pool_count": 8192, 00:16:32.760 "large_pool_count": 1024, 00:16:32.760 "small_bufsize": 8192, 00:16:32.760 "large_bufsize": 135168, 00:16:32.760 "enable_numa": false 00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "sock", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "sock_set_default_impl", 00:16:32.760 "params": { 00:16:32.760 "impl_name": "posix" 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "sock_impl_set_options", 00:16:32.760 "params": { 00:16:32.760 "impl_name": "ssl", 00:16:32.760 "recv_buf_size": 4096, 00:16:32.760 "send_buf_size": 4096, 00:16:32.760 "enable_recv_pipe": true, 00:16:32.760 "enable_quickack": false, 00:16:32.760 "enable_placement_id": 0, 00:16:32.760 "enable_zerocopy_send_server": true, 00:16:32.760 "enable_zerocopy_send_client": false, 00:16:32.760 "zerocopy_threshold": 0, 00:16:32.760 "tls_version": 0, 00:16:32.760 "enable_ktls": false 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "sock_impl_set_options", 00:16:32.760 "params": { 00:16:32.760 "impl_name": "posix", 00:16:32.760 "recv_buf_size": 2097152, 00:16:32.760 "send_buf_size": 2097152, 00:16:32.760 "enable_recv_pipe": true, 00:16:32.760 "enable_quickack": false, 00:16:32.760 "enable_placement_id": 0, 00:16:32.760 "enable_zerocopy_send_server": true, 00:16:32.760 "enable_zerocopy_send_client": false, 00:16:32.760 "zerocopy_threshold": 0, 00:16:32.760 "tls_version": 0, 00:16:32.760 "enable_ktls": false 
00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "vmd", 00:16:32.760 "config": [] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "accel", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "accel_set_options", 00:16:32.760 "params": { 00:16:32.760 "small_cache_size": 128, 00:16:32.760 "large_cache_size": 16, 00:16:32.760 "task_count": 2048, 00:16:32.760 "sequence_count": 2048, 00:16:32.760 "buf_count": 2048 00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "bdev", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "bdev_set_options", 00:16:32.760 "params": { 00:16:32.760 "bdev_io_pool_size": 65535, 00:16:32.760 "bdev_io_cache_size": 256, 00:16:32.760 "bdev_auto_examine": true, 00:16:32.760 "iobuf_small_cache_size": 128, 00:16:32.760 "iobuf_large_cache_size": 16 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "bdev_raid_set_options", 00:16:32.760 "params": { 00:16:32.760 "process_window_size_kb": 1024, 00:16:32.760 "process_max_bandwidth_mb_sec": 0 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "bdev_iscsi_set_options", 00:16:32.760 "params": { 00:16:32.760 "timeout_sec": 30 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "bdev_nvme_set_options", 00:16:32.760 "params": { 00:16:32.760 "action_on_timeout": "none", 00:16:32.760 "timeout_us": 0, 00:16:32.760 "timeout_admin_us": 0, 00:16:32.760 "keep_alive_timeout_ms": 10000, 00:16:32.760 "arbitration_burst": 0, 00:16:32.760 "low_priority_weight": 0, 00:16:32.760 "medium_priority_weight": 0, 00:16:32.760 "high_priority_weight": 0, 00:16:32.760 "nvme_adminq_poll_period_us": 10000, 00:16:32.760 "nvme_ioq_poll_period_us": 0, 00:16:32.760 "io_queue_requests": 0, 00:16:32.760 "delay_cmd_submit": true, 00:16:32.760 "transport_retry_count": 4, 00:16:32.760 "bdev_retry_count": 3, 00:16:32.760 "transport_ack_timeout": 0, 00:16:32.760 "ctrlr_loss_timeout_sec": 0, 00:16:32.760 "reconnect_delay_sec": 0, 00:16:32.760 "fast_io_fail_timeout_sec": 0, 00:16:32.760 "disable_auto_failback": false, 00:16:32.760 "generate_uuids": false, 00:16:32.760 "transport_tos": 0, 00:16:32.760 "nvme_error_stat": false, 00:16:32.760 "rdma_srq_size": 0, 00:16:32.760 "io_path_stat": false, 00:16:32.760 "allow_accel_sequence": false, 00:16:32.760 "rdma_max_cq_size": 0, 00:16:32.760 "rdma_cm_event_timeout_ms": 0, 00:16:32.760 "dhchap_digests": [ 00:16:32.760 "sha256", 00:16:32.760 "sha384", 00:16:32.760 "sha512" 00:16:32.760 ], 00:16:32.760 "dhchap_dhgroups": [ 00:16:32.760 "null", 00:16:32.760 "ffdhe2048", 00:16:32.760 "ffdhe3072", 00:16:32.760 "ffdhe4096", 00:16:32.760 "ffdhe6144", 00:16:32.760 "ffdhe8192" 00:16:32.760 ] 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "bdev_nvme_set_hotplug", 00:16:32.760 "params": { 00:16:32.760 "period_us": 100000, 00:16:32.760 "enable": false 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "bdev_malloc_create", 00:16:32.760 "params": { 00:16:32.760 "name": "malloc0", 00:16:32.760 "num_blocks": 8192, 00:16:32.760 "block_size": 4096, 00:16:32.760 "physical_block_size": 4096, 00:16:32.760 "uuid": "300b7830-58c3-4104-a5b8-212a061fbed7", 00:16:32.760 "optimal_io_boundary": 0, 00:16:32.760 "md_size": 0, 00:16:32.760 "dif_type": 0, 00:16:32.760 "dif_is_head_of_md": false, 00:16:32.760 "dif_pi_format": 0 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "bdev_wait_for_examine" 00:16:32.760 } 
00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "scsi", 00:16:32.760 "config": null 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "scheduler", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "framework_set_scheduler", 00:16:32.760 "params": { 00:16:32.760 "name": "static" 00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "vhost_scsi", 00:16:32.760 "config": [] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "vhost_blk", 00:16:32.760 "config": [] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "ublk", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "ublk_create_target", 00:16:32.760 "params": { 00:16:32.760 "cpumask": "1" 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "ublk_start_disk", 00:16:32.760 "params": { 00:16:32.760 "bdev_name": "malloc0", 00:16:32.760 "ublk_id": 0, 00:16:32.760 "num_queues": 1, 00:16:32.760 "queue_depth": 128 00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "nbd", 00:16:32.760 "config": [] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "nvmf", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "nvmf_set_config", 00:16:32.760 "params": { 00:16:32.760 "discovery_filter": "match_any", 00:16:32.760 "admin_cmd_passthru": { 00:16:32.760 "identify_ctrlr": false 00:16:32.760 }, 00:16:32.760 "dhchap_digests": [ 00:16:32.760 "sha256", 00:16:32.760 "sha384", 00:16:32.760 "sha512" 00:16:32.760 ], 00:16:32.760 "dhchap_dhgroups": [ 00:16:32.760 "null", 00:16:32.760 "ffdhe2048", 00:16:32.760 "ffdhe3072", 00:16:32.760 "ffdhe4096", 00:16:32.760 "ffdhe6144", 00:16:32.760 "ffdhe8192" 00:16:32.760 ] 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "nvmf_set_max_subsystems", 00:16:32.760 "params": { 00:16:32.760 "max_subsystems": 1024 00:16:32.760 } 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "method": "nvmf_set_crdt", 00:16:32.760 "params": { 00:16:32.760 "crdt1": 0, 00:16:32.760 "crdt2": 0, 00:16:32.760 "crdt3": 0 00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }, 00:16:32.760 { 00:16:32.760 "subsystem": "iscsi", 00:16:32.760 "config": [ 00:16:32.760 { 00:16:32.760 "method": "iscsi_set_options", 00:16:32.760 "params": { 00:16:32.760 "node_base": "iqn.2016-06.io.spdk", 00:16:32.760 "max_sessions": 128, 00:16:32.760 "max_connections_per_session": 2, 00:16:32.760 "max_queue_depth": 64, 00:16:32.760 "default_time2wait": 2, 00:16:32.760 "default_time2retain": 20, 00:16:32.760 "first_burst_length": 8192, 00:16:32.760 "immediate_data": true, 00:16:32.760 "allow_duplicated_isid": false, 00:16:32.760 "error_recovery_level": 0, 00:16:32.760 "nop_timeout": 60, 00:16:32.760 "nop_in_interval": 30, 00:16:32.760 "disable_chap": false, 00:16:32.760 "require_chap": false, 00:16:32.760 "mutual_chap": false, 00:16:32.760 "chap_group": 0, 00:16:32.760 "max_large_datain_per_connection": 64, 00:16:32.760 "max_r2t_per_connection": 4, 00:16:32.760 "pdu_pool_size": 36864, 00:16:32.760 "immediate_data_pool_size": 16384, 00:16:32.760 "data_out_pool_size": 2048 00:16:32.760 } 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 } 00:16:32.760 ] 00:16:32.760 }' 00:16:32.760 18:06:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:32.760 [2024-11-05 18:06:02.046618] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
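The -c /dev/fd/63 argument in the spdk_tgt invocation above is not a file on disk: ublk.sh@118 echoes the saved JSON into bash process substitution, so the new target reads its configuration from an anonymous pipe instead of a temporary file. A minimal sketch of the same trick, assuming a built SPDK tree ($config_json stands in for the JSON dump above):

    # <(...) expands to a /dev/fd/N path (typically /dev/fd/63) that
    # spdk_tgt opens and reads like any ordinary -c config file
    ./build/bin/spdk_tgt -L ublk -c <(echo "$config_json")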
00:16:32.760 [2024-11-05 18:06:02.046735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72207 ] 00:16:33.019 [2024-11-05 18:06:02.226579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.019 [2024-11-05 18:06:02.333539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.398 [2024-11-05 18:06:03.369426] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:34.398 [2024-11-05 18:06:03.370468] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:34.398 [2024-11-05 18:06:03.377556] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:34.398 [2024-11-05 18:06:03.377656] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:34.398 [2024-11-05 18:06:03.377670] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:34.398 [2024-11-05 18:06:03.377678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:34.398 [2024-11-05 18:06:03.386494] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:34.398 [2024-11-05 18:06:03.386521] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:34.398 [2024-11-05 18:06:03.393438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:34.398 [2024-11-05 18:06:03.393529] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:34.398 [2024-11-05 18:06:03.410433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72207 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72207 ']' 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72207 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72207 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:34.398 killing process with pid 72207 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72207' 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72207 00:16:34.398 18:06:03 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72207 00:16:35.777 [2024-11-05 18:06:05.063874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:35.777 [2024-11-05 18:06:05.101438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:35.777 [2024-11-05 18:06:05.101578] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:36.036 [2024-11-05 18:06:05.111430] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:36.036 [2024-11-05 18:06:05.111493] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:36.036 [2024-11-05 18:06:05.111503] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:36.036 [2024-11-05 18:06:05.111527] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:36.036 [2024-11-05 18:06:05.111665] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:37.943 18:06:06 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:37.943 00:16:37.943 real 0m9.909s 00:16:37.943 user 0m7.534s 00:16:37.943 sys 0m3.075s 00:16:37.943 18:06:06 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:37.943 18:06:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:37.943 ************************************ 00:16:37.943 END TEST test_save_ublk_config 00:16:37.943 ************************************ 00:16:37.943 18:06:06 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72298 00:16:37.943 18:06:06 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:37.943 18:06:06 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:37.943 18:06:06 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72298 00:16:37.943 18:06:06 ublk -- common/autotest_common.sh@833 -- # '[' -z 72298 ']' 00:16:37.943 18:06:06 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.943 18:06:06 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:37.943 18:06:06 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.943 18:06:06 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:37.943 18:06:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:37.943 [2024-11-05 18:06:07.057075] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
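The save/restore flow that test_save_ublk_config exercises can be reproduced by hand with the standard rpc.py helpers; a minimal sketch, assuming a built SPDK tree (the bdev size, queue count, and queue depth mirror the saved config above):

    # bring up a target with one ublk disk backed by a 32 MiB malloc bdev
    ./build/bin/spdk_tgt -L ublk &
    # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    # snapshot the full subsystem configuration (the JSON dump shown earlier)
    ./scripts/rpc.py save_config > ublk.json
    # restart the target from the snapshot; /dev/ublkb0 should come back
    ./scripts/rpc.py spdk_kill_instance SIGTERM
    ./build/bin/spdk_tgt -L ublk -c ublk.json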
00:16:37.943 [2024-11-05 18:06:07.057209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72298 ] 00:16:37.943 [2024-11-05 18:06:07.238866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:38.202 [2024-11-05 18:06:07.341005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.202 [2024-11-05 18:06:07.341037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.139 18:06:08 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:39.139 18:06:08 ublk -- common/autotest_common.sh@866 -- # return 0 00:16:39.139 18:06:08 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:39.139 18:06:08 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:39.139 18:06:08 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:39.139 18:06:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.139 ************************************ 00:16:39.139 START TEST test_create_ublk 00:16:39.139 ************************************ 00:16:39.139 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:16:39.139 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:39.139 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.139 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.139 [2024-11-05 18:06:08.215431] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:39.139 [2024-11-05 18:06:08.218216] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:39.139 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.139 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:39.139 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:39.139 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.139 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.399 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:39.399 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.399 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.399 [2024-11-05 18:06:08.493595] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:39.399 [2024-11-05 18:06:08.494033] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:39.399 [2024-11-05 18:06:08.494053] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:39.399 [2024-11-05 18:06:08.494063] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.399 [2024-11-05 18:06:08.502724] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.399 [2024-11-05 18:06:08.502751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.399 
[2024-11-05 18:06:08.509451] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.399 [2024-11-05 18:06:08.519478] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:39.399 [2024-11-05 18:06:08.530527] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.399 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:39.399 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.399 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.399 18:06:08 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:39.399 { 00:16:39.399 "ublk_device": "/dev/ublkb0", 00:16:39.399 "id": 0, 00:16:39.399 "queue_depth": 512, 00:16:39.399 "num_queues": 4, 00:16:39.399 "bdev_name": "Malloc0" 00:16:39.399 } 00:16:39.399 ]' 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:39.399 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:39.658 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:39.658 18:06:08 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
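The run_fio_test call above is a helper from test/lvol/common.sh, and the xtrace shows exactly how it assembles the fio command line. A reconstruction implied by that trace (a simplified sketch, not the verbatim helper): with a pattern argument, fio writes 0xcc across the device and would read it back for verification, although here the time-based write phase consumes the full 10 s runtime, as fio notes below.

    run_fio_test() {
        local file=$1 offset=$2 size=$3 rw=$4 pattern=$5 extra_params=$6
        local pattern_template='' fio_template=''
        # a non-empty pattern turns on fio's write-then-verify mode
        [[ -n $pattern ]] && pattern_template="--do_verify=1 --verify=pattern --verify_pattern=$pattern --verify_state_save=0"
        fio_template="fio --name=fio_test --filename=$file --offset=$offset --size=$size --rw=$rw --direct=1 $extra_params $pattern_template"
        # word splitting on the unquoted string launches the expanded command,
        # matching the expanded fio invocation traced in the log
        $fio_template
    }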
00:16:39.658 18:06:08 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:39.658 fio: verification read phase will never start because write phase uses all of runtime 00:16:39.658 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:39.658 fio-3.35 00:16:39.658 Starting 1 process 00:16:51.884 00:16:51.884 fio_test: (groupid=0, jobs=1): err= 0: pid=72350: Tue Nov 5 18:06:18 2024 00:16:51.884 write: IOPS=12.3k, BW=48.1MiB/s (50.5MB/s)(481MiB/10001msec); 0 zone resets 00:16:51.884 clat (usec): min=41, max=8007, avg=80.32, stdev=143.20 00:16:51.884 lat (usec): min=42, max=8013, avg=80.77, stdev=143.25 00:16:51.884 clat percentiles (usec): 00:16:51.884 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 60], 00:16:51.884 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 65], 00:16:51.884 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 94], 95.00th=[ 169], 00:16:51.884 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 3032], 99.95th=[ 3523], 00:16:51.884 | 99.99th=[ 3851] 00:16:51.884 bw ( KiB/s): min=18648, max=60920, per=100.00%, avg=50519.58, stdev=15393.34, samples=19 00:16:51.884 iops : min= 4662, max=15230, avg=12629.89, stdev=3848.34, samples=19 00:16:51.884 lat (usec) : 50=0.01%, 100=90.24%, 250=9.45%, 500=0.01%, 750=0.02% 00:16:51.884 lat (usec) : 1000=0.01% 00:16:51.884 lat (msec) : 2=0.08%, 4=0.18%, 10=0.01% 00:16:51.884 cpu : usr=2.54%, sys=8.62%, ctx=123261, majf=0, minf=797 00:16:51.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:51.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.884 issued rwts: total=0,123261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:51.884 00:16:51.884 Run status group 0 (all jobs): 00:16:51.884 WRITE: bw=48.1MiB/s (50.5MB/s), 48.1MiB/s-48.1MiB/s (50.5MB/s-50.5MB/s), io=481MiB (505MB), run=10001-10001msec 00:16:51.884 00:16:51.884 Disk stats (read/write): 00:16:51.884 ublkb0: ios=0/122190, merge=0/0, ticks=0/8790, in_queue=8790, util=99.05% 00:16:51.884 18:06:18 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:51.884 18:06:18 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.884 18:06:18 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 [2024-11-05 18:06:19.003787] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:51.884 [2024-11-05 18:06:19.037866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:51.884 [2024-11-05 18:06:19.042799] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:51.884 [2024-11-05 18:06:19.050450] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:51.884 [2024-11-05 18:06:19.050723] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:51.884 [2024-11-05 18:06:19.050747] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.884 18:06:19 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 [2024-11-05 18:06:19.074522] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:51.884 request: 00:16:51.884 { 00:16:51.884 "ublk_id": 0, 00:16:51.884 "method": "ublk_stop_disk", 00:16:51.884 "req_id": 1 00:16:51.884 } 00:16:51.884 Got JSON-RPC error response 00:16:51.884 response: 00:16:51.884 { 00:16:51.884 "code": -19, 00:16:51.884 "message": "No such device" 00:16:51.884 } 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:51.884 18:06:19 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 [2024-11-05 18:06:19.090509] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:51.884 [2024-11-05 18:06:19.097437] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:51.884 [2024-11-05 18:06:19.097483] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.884 18:06:19 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.884 18:06:19 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:51.884 18:06:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.884 18:06:19 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:51.884 18:06:19 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:51.884 18:06:19 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:51.884 18:06:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.884 18:06:19 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:51.884 18:06:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:51.884 18:06:19 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:51.884 00:16:51.884 real 0m11.711s 00:16:51.884 user 0m0.622s 00:16:51.884 sys 0m0.999s 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:51.884 18:06:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 ************************************ 00:16:51.884 END TEST test_create_ublk 00:16:51.884 ************************************ 00:16:51.884 18:06:19 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:51.884 18:06:19 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:51.884 18:06:19 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:51.884 18:06:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 ************************************ 00:16:51.884 START TEST test_create_multi_ublk 00:16:51.884 ************************************ 00:16:51.884 18:06:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:16:51.884 18:06:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:51.884 18:06:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.884 18:06:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.884 [2024-11-05 18:06:19.996425] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:51.884 [2024-11-05 18:06:19.998815] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:51.884 18:06:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.884 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:51.884 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:51.884 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.885 [2024-11-05 18:06:20.281568] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:51.885 [2024-11-05 18:06:20.282040] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:51.885 [2024-11-05 18:06:20.282059] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:51.885 [2024-11-05 18:06:20.282074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.885 [2024-11-05 18:06:20.290731] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.885 [2024-11-05 18:06:20.290760] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.885 [2024-11-05 18:06:20.297457] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.885 [2024-11-05 18:06:20.298028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:51.885 [2024-11-05 18:06:20.313448] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.885 [2024-11-05 18:06:20.617559] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:51.885 [2024-11-05 18:06:20.618012] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:51.885 [2024-11-05 18:06:20.618032] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:51.885 [2024-11-05 18:06:20.618041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.885 [2024-11-05 18:06:20.625465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.885 [2024-11-05 18:06:20.625487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.885 [2024-11-05 18:06:20.633445] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.885 [2024-11-05 18:06:20.634024] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:51.885 [2024-11-05 18:06:20.657449] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.885 
18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.885 [2024-11-05 18:06:20.940552] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:51.885 [2024-11-05 18:06:20.941022] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:51.885 [2024-11-05 18:06:20.941039] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:51.885 [2024-11-05 18:06:20.941050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:51.885 [2024-11-05 18:06:20.948475] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:51.885 [2024-11-05 18:06:20.948504] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:51.885 [2024-11-05 18:06:20.956443] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:51.885 [2024-11-05 18:06:20.957033] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:51.885 [2024-11-05 18:06:20.965487] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.885 18:06:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.145 [2024-11-05 18:06:21.274574] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:52.145 [2024-11-05 18:06:21.275005] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:52.145 [2024-11-05 18:06:21.275025] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:52.145 [2024-11-05 18:06:21.275034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:52.145 
[2024-11-05 18:06:21.282457] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:52.145 [2024-11-05 18:06:21.282482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:52.145 [2024-11-05 18:06:21.290438] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:52.145 [2024-11-05 18:06:21.290999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:52.145 [2024-11-05 18:06:21.307449] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:52.145 { 00:16:52.145 "ublk_device": "/dev/ublkb0", 00:16:52.145 "id": 0, 00:16:52.145 "queue_depth": 512, 00:16:52.145 "num_queues": 4, 00:16:52.145 "bdev_name": "Malloc0" 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "ublk_device": "/dev/ublkb1", 00:16:52.145 "id": 1, 00:16:52.145 "queue_depth": 512, 00:16:52.145 "num_queues": 4, 00:16:52.145 "bdev_name": "Malloc1" 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "ublk_device": "/dev/ublkb2", 00:16:52.145 "id": 2, 00:16:52.145 "queue_depth": 512, 00:16:52.145 "num_queues": 4, 00:16:52.145 "bdev_name": "Malloc2" 00:16:52.145 }, 00:16:52.145 { 00:16:52.145 "ublk_device": "/dev/ublkb3", 00:16:52.145 "id": 3, 00:16:52.145 "queue_depth": 512, 00:16:52.145 "num_queues": 4, 00:16:52.145 "bdev_name": "Malloc3" 00:16:52.145 } 00:16:52.145 ]' 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:52.145 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.404 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.664 18:06:21 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.923 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.183 [2024-11-05 18:06:22.254565] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:53.183 [2024-11-05 18:06:22.292871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:53.183 [2024-11-05 18:06:22.293893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:53.183 [2024-11-05 18:06:22.300454] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:53.183 [2024-11-05 18:06:22.300738] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:53.183 [2024-11-05 18:06:22.300758] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.183 [2024-11-05 18:06:22.315514] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:53.183 [2024-11-05 18:06:22.345841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:53.183 [2024-11-05 18:06:22.346883] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:53.183 [2024-11-05 18:06:22.355456] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:53.183 [2024-11-05 18:06:22.355702] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:53.183 [2024-11-05 18:06:22.355721] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:53.183 [2024-11-05 18:06:22.371540] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:53.183 [2024-11-05 18:06:22.411447] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:53.183 [2024-11-05 18:06:22.412244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:53.183 [2024-11-05 18:06:22.419452] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:53.183 [2024-11-05 18:06:22.419739] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:53.183 [2024-11-05 18:06:22.419758] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.183 [2024-11-05 18:06:22.427518] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:53.183 [2024-11-05 18:06:22.460472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:53.183 [2024-11-05 18:06:22.461202] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:53.183 [2024-11-05 18:06:22.466427] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:53.183 [2024-11-05 18:06:22.466717] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:53.183 [2024-11-05 18:06:22.466736] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:53.183 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.184 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:53.445 [2024-11-05 18:06:22.661494] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:53.445 [2024-11-05 18:06:22.668425] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:53.445 [2024-11-05 18:06:22.668460] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:53.445 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:53.445 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:53.445 18:06:22 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:53.445 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.445 18:06:22 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.404 18:06:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.404 18:06:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:54.404 18:06:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:54.404 18:06:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.404 18:06:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.664 18:06:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.664 18:06:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:54.664 18:06:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:54.664 18:06:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.664 18:06:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:54.923 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.923 18:06:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:54.923 18:06:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:54.923 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.923 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.182 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.182 18:06:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:55.182 18:06:24 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:55.182 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.182 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.182 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.182 18:06:24 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:55.182 18:06:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:55.441 00:16:55.441 real 0m4.624s 00:16:55.441 user 0m1.069s 00:16:55.441 sys 0m0.223s 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:55.441 18:06:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:55.441 ************************************ 00:16:55.441 END TEST test_create_multi_ublk 00:16:55.441 ************************************ 00:16:55.441 18:06:24 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:55.441 18:06:24 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:55.441 18:06:24 ublk -- ublk/ublk.sh@130 -- # killprocess 72298 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@952 -- # '[' -z 72298 ']' 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@956 -- # kill -0 72298 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@957 -- # uname 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72298 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:55.441 killing process with pid 72298 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72298' 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@971 -- # kill 72298 00:16:55.441 18:06:24 ublk -- common/autotest_common.sh@976 -- # wait 72298 00:16:56.822 [2024-11-05 18:06:25.812225] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:56.822 [2024-11-05 18:06:25.812277] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:57.760 00:16:57.760 real 0m30.269s 00:16:57.760 user 0m43.780s 00:16:57.760 sys 0m9.797s 00:16:57.760 18:06:26 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:57.760 18:06:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:57.760 ************************************ 00:16:57.760 END TEST ublk 00:16:57.760 ************************************ 00:16:57.760 18:06:27 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:57.760 
18:06:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:57.760 18:06:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:57.760 18:06:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.760 ************************************ 00:16:57.760 START TEST ublk_recovery 00:16:57.760 ************************************ 00:16:57.760 18:06:27 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:58.019 * Looking for test storage... 00:16:58.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:58.019 18:06:27 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:58.019 18:06:27 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:16:58.019 18:06:27 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:58.019 18:06:27 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.019 18:06:27 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.020 18:06:27 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.020 --rc genhtml_branch_coverage=1 00:16:58.020 --rc genhtml_function_coverage=1 00:16:58.020 --rc genhtml_legend=1 00:16:58.020 --rc geninfo_all_blocks=1 00:16:58.020 --rc geninfo_unexecuted_blocks=1 00:16:58.020 00:16:58.020 ' 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.020 --rc genhtml_branch_coverage=1 00:16:58.020 --rc genhtml_function_coverage=1 00:16:58.020 --rc genhtml_legend=1 00:16:58.020 --rc geninfo_all_blocks=1 00:16:58.020 --rc geninfo_unexecuted_blocks=1 00:16:58.020 00:16:58.020 ' 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.020 --rc genhtml_branch_coverage=1 00:16:58.020 --rc genhtml_function_coverage=1 00:16:58.020 --rc genhtml_legend=1 00:16:58.020 --rc geninfo_all_blocks=1 00:16:58.020 --rc geninfo_unexecuted_blocks=1 00:16:58.020 00:16:58.020 ' 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:58.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.020 --rc genhtml_branch_coverage=1 00:16:58.020 --rc genhtml_function_coverage=1 00:16:58.020 --rc genhtml_legend=1 00:16:58.020 --rc geninfo_all_blocks=1 00:16:58.020 --rc geninfo_unexecuted_blocks=1 00:16:58.020 00:16:58.020 ' 00:16:58.020 18:06:27 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:58.020 18:06:27 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:58.020 18:06:27 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:58.020 18:06:27 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=72725 00:16:58.020 18:06:27 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:58.020 18:06:27 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.020 18:06:27 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 72725 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 72725 ']' 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:58.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:58.020 18:06:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:58.279 [2024-11-05 18:06:27.418383] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:16:58.279 [2024-11-05 18:06:27.418535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72725 ] 00:16:58.279 [2024-11-05 18:06:27.599520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:58.539 [2024-11-05 18:06:27.714216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.539 [2024-11-05 18:06:27.714250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:16:59.476 18:06:28 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.476 [2024-11-05 18:06:28.564429] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:59.476 [2024-11-05 18:06:28.566853] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.476 18:06:28 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.476 malloc0 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.476 18:06:28 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:59.476 [2024-11-05 18:06:28.704581] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:59.476 [2024-11-05 18:06:28.704699] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:59.476 [2024-11-05 18:06:28.704714] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:59.476 [2024-11-05 18:06:28.704726] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.476 [2024-11-05 18:06:28.713541] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.476 [2024-11-05 18:06:28.713565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.476 [2024-11-05 18:06:28.720442] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.476 [2024-11-05 18:06:28.720580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:59.476 [2024-11-05 18:06:28.742452] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.476 1 00:16:59.476 18:06:28 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.476 18:06:28 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:00.859 18:06:29 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=72760 00:17:00.859 18:06:29 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:00.859 18:06:29 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:00.859 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:00.859 fio-3.35 00:17:00.859 Starting 1 process 00:17:06.289 18:06:34 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 72725 00:17:06.289 18:06:34 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:10.489 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 72725 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:10.489 18:06:39 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=72871 00:17:10.489 18:06:39 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:10.489 18:06:39 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.489 18:06:39 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 72871 00:17:10.489 18:06:39 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 72871 ']' 00:17:10.489 18:06:39 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.489 18:06:39 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:10.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.489 18:06:39 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.489 18:06:39 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:10.489 18:06:39 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:10.747 [2024-11-05 18:06:39.874764] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
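
At this point the test has reached its crux: the first target (pid 72725) was SIGKILLed while fio was mid-run against /dev/ublkb1, a second spdk_tgt is starting, and ublk_recover_disk will reattach the surviving kernel device without fio ever exiting. A condensed sketch of the sequence ublk_recovery.sh drives, assuming an SPDK build tree (the backgrounding and the $rpc alias are illustrative; the RPC names and parameters are the ones traced in this log):

  # Condensed recovery sequence (illustrative sketch, not the exact test script)
  rpc="scripts/rpc.py"
  build/bin/spdk_tgt -m 0x3 -L ublk & tgt_pid=$!
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096          # 64 MiB bdev, 4096-byte blocks
  $rpc ublk_start_disk malloc0 1 -q 2 -d 128          # exposes /dev/ublkb1 (2 queues, QD 128)
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & fio_pid=$!
  kill -9 "$tgt_pid"                                  # crash the target mid-I/O; fio stalls
  build/bin/spdk_tgt -m 0x3 -L ublk & tgt_pid=$!      # new target, new pid
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096          # recreate the backing bdev
  $rpc ublk_recover_disk malloc0 1                    # START/END_USER_RECOVERY, as traced below
  wait "$fio_pid"                                     # fio should complete with err= 0
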
00:17:10.747 [2024-11-05 18:06:39.875411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72871 ] 00:17:10.747 [2024-11-05 18:06:40.054868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:11.005 [2024-11-05 18:06:40.165139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.005 [2024-11-05 18:06:40.165173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:17:11.942 18:06:41 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.942 [2024-11-05 18:06:41.043430] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.942 [2024-11-05 18:06:41.046108] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.942 18:06:41 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.942 malloc0 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.942 18:06:41 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:11.942 [2024-11-05 18:06:41.201591] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:11.942 [2024-11-05 18:06:41.201652] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:11.942 [2024-11-05 18:06:41.201664] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:11.942 [2024-11-05 18:06:41.209473] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:11.942 [2024-11-05 18:06:41.209503] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:17:11.942 [2024-11-05 18:06:41.209513] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:11.942 [2024-11-05 18:06:41.209612] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:11.942 1 00:17:11.942 18:06:41 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.942 18:06:41 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 72760 00:17:11.942 [2024-11-05 18:06:41.217433] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:11.942 [2024-11-05 18:06:41.221389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:11.942 [2024-11-05 18:06:41.231622] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:11.942 [2024-11-05 
18:06:41.231650] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:18:08.178
00:18:08.178 fio_test: (groupid=0, jobs=1): err= 0: pid=72763: Tue Nov 5 18:07:30 2024
00:18:08.178 read: IOPS=22.4k, BW=87.5MiB/s (91.7MB/s)(5249MiB/60002msec)
00:18:08.178 slat (nsec): min=1890, max=282477, avg=7073.83, stdev=2099.51
00:18:08.178 clat (usec): min=1032, max=6479.5k, avg=2801.28, stdev=43985.19
00:18:08.178 lat (usec): min=1039, max=6479.5k, avg=2808.36, stdev=43985.19
00:18:08.178 clat percentiles (usec):
00:18:08.179 | 1.00th=[ 1926], 5.00th=[ 2089], 10.00th=[ 2147], 20.00th=[ 2180],
00:18:08.179 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2278], 60.00th=[ 2311],
00:18:08.179 | 70.00th=[ 2376], 80.00th=[ 2802], 90.00th=[ 3097], 95.00th=[ 3720],
00:18:08.179 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 7046], 99.95th=[ 7963],
00:18:08.179 | 99.99th=[12780]
00:18:08.179 bw ( KiB/s): min=25980, max=108696, per=100.00%, avg=99578.95, stdev=13570.43, samples=107
00:18:08.179 iops : min= 6495, max=27174, avg=24894.70, stdev=3392.62, samples=107
00:18:08.179 write: IOPS=22.4k, BW=87.4MiB/s (91.6MB/s)(5242MiB/60002msec); 0 zone resets
00:18:08.179 slat (usec): min=2, max=989, avg= 7.14, stdev= 2.53
00:18:08.179 clat (usec): min=1028, max=6479.8k, avg=2902.27, stdev=45414.49
00:18:08.179 lat (usec): min=1035, max=6479.8k, avg=2909.41, stdev=45414.49
00:18:08.179 clat percentiles (usec):
00:18:08.179 | 1.00th=[ 1926], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2278],
00:18:08.179 | 30.00th=[ 2343], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2409],
00:18:08.179 | 70.00th=[ 2474], 80.00th=[ 2835], 90.00th=[ 3195], 95.00th=[ 3720],
00:18:08.179 | 99.00th=[ 5211], 99.50th=[ 5800], 99.90th=[ 7177], 99.95th=[ 8094],
00:18:08.179 | 99.99th=[12911]
00:18:08.179 bw ( KiB/s): min=26970, max=108056, per=100.00%, avg=99463.95, stdev=13297.62, samples=107
00:18:08.179 iops : min= 6742, max=27014, avg=24865.96, stdev=3324.43, samples=107
00:18:08.179 lat (msec) : 2=2.39%, 4=93.83%, 10=3.76%, 20=0.02%, >=2000=0.01%
00:18:08.179 cpu : usr=12.28%, sys=31.53%, ctx=115318, majf=0, minf=14
00:18:08.179 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:18:08.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:08.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:08.179 issued rwts: total=1343785,1341977,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:08.179 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:08.179
00:18:08.179 Run status group 0 (all jobs):
00:18:08.179 READ: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=5249MiB (5504MB), run=60002-60002msec
00:18:08.179 WRITE: bw=87.4MiB/s (91.6MB/s), 87.4MiB/s-87.4MiB/s (91.6MB/s-91.6MB/s), io=5242MiB (5497MB), run=60002-60002msec
00:18:08.179
00:18:08.179 Disk stats (read/write):
00:18:08.179 ublkb1: ios=1341000/1339202, merge=0/0, ticks=3652323/3647351, in_queue=7299675, util=99.93%
00:18:08.179 18:07:30 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:18:08.179 [2024-11-05 18:07:30.029764] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:18:08.179 [2024-11-05 18:07:30.059459] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:18:08.179 [2024-11-05
18:07:30.059653] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:08.179 [2024-11-05 18:07:30.065444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:08.179 [2024-11-05 18:07:30.065607] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:08.179 [2024-11-05 18:07:30.065633] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.179 18:07:30 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 [2024-11-05 18:07:30.073533] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:08.179 [2024-11-05 18:07:30.081427] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:08.179 [2024-11-05 18:07:30.081469] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.179 18:07:30 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:08.179 18:07:30 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:08.179 18:07:30 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 72871 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 72871 ']' 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 72871 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72871 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:08.179 killing process with pid 72871 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72871' 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@971 -- # kill 72871 00:18:08.179 18:07:30 ublk_recovery -- common/autotest_common.sh@976 -- # wait 72871 00:18:08.179 [2024-11-05 18:07:31.681862] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:08.179 [2024-11-05 18:07:31.681919] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:08.179 00:18:08.179 real 1m5.972s 00:18:08.179 user 1m50.869s 00:18:08.179 sys 0m36.408s 00:18:08.179 18:07:33 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:08.179 18:07:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 ************************************ 00:18:08.179 END TEST ublk_recovery 00:18:08.179 ************************************ 00:18:08.179 18:07:33 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:08.179 18:07:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:08.179 18:07:33 -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 18:07:33 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:18:08.179 18:07:33 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:08.179 18:07:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:18:08.179 18:07:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:08.179 18:07:33 -- common/autotest_common.sh@10 -- # set +x 00:18:08.179 ************************************ 00:18:08.179 START TEST ftl 00:18:08.179 ************************************ 00:18:08.179 18:07:33 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:08.179 * Looking for test storage... 00:18:08.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:08.179 18:07:33 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:08.179 18:07:33 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:18:08.179 18:07:33 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:08.179 18:07:33 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:08.179 18:07:33 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.179 18:07:33 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.179 18:07:33 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.179 18:07:33 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.179 18:07:33 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.179 18:07:33 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.179 18:07:33 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:08.179 18:07:33 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.179 18:07:33 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.179 18:07:33 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.179 18:07:33 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.179 18:07:33 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:08.179 18:07:33 ftl -- scripts/common.sh@345 -- # : 1 00:18:08.179 18:07:33 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.179 18:07:33 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:08.179 18:07:33 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:08.179 18:07:33 ftl -- scripts/common.sh@353 -- # local d=1 00:18:08.179 18:07:33 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.180 18:07:33 ftl -- scripts/common.sh@355 -- # echo 1 00:18:08.180 18:07:33 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.180 18:07:33 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:08.180 18:07:33 ftl -- scripts/common.sh@353 -- # local d=2 00:18:08.180 18:07:33 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.180 18:07:33 ftl -- scripts/common.sh@355 -- # echo 2 00:18:08.180 18:07:33 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.180 18:07:33 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.180 18:07:33 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.180 18:07:33 ftl -- scripts/common.sh@368 -- # return 0 00:18:08.180 18:07:33 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.180 18:07:33 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:08.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.180 --rc genhtml_branch_coverage=1 00:18:08.180 --rc genhtml_function_coverage=1 00:18:08.180 --rc genhtml_legend=1 00:18:08.180 --rc geninfo_all_blocks=1 00:18:08.180 --rc geninfo_unexecuted_blocks=1 00:18:08.180 00:18:08.180 ' 00:18:08.180 18:07:33 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:08.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.180 --rc genhtml_branch_coverage=1 00:18:08.180 --rc genhtml_function_coverage=1 00:18:08.180 --rc genhtml_legend=1 00:18:08.180 --rc geninfo_all_blocks=1 00:18:08.180 --rc geninfo_unexecuted_blocks=1 00:18:08.180 00:18:08.180 ' 00:18:08.180 18:07:33 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:08.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.180 --rc genhtml_branch_coverage=1 00:18:08.180 --rc genhtml_function_coverage=1 00:18:08.180 --rc genhtml_legend=1 00:18:08.180 --rc geninfo_all_blocks=1 00:18:08.180 --rc geninfo_unexecuted_blocks=1 00:18:08.180 00:18:08.180 ' 00:18:08.180 18:07:33 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:08.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.180 --rc genhtml_branch_coverage=1 00:18:08.180 --rc genhtml_function_coverage=1 00:18:08.180 --rc genhtml_legend=1 00:18:08.180 --rc geninfo_all_blocks=1 00:18:08.180 --rc geninfo_unexecuted_blocks=1 00:18:08.180 00:18:08.180 ' 00:18:08.180 18:07:33 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:08.180 18:07:33 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:08.180 18:07:33 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:08.180 18:07:33 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:08.180 18:07:33 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
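
The cmp_versions trace just above is how each suite picks its lcov option spellings: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares them numerically field by field, and since 1 < 2 the pre-2.0 --rc lcov_* names are exported. A simplified, self-contained paraphrase of that scripts/common.sh logic (numeric validation via the decimal helper is omitted; this is not the exact source):

  # lt A B -> success (0) when version A sorts strictly before version B
  lt() {
      local -a ver1 ver2
      local v n
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov < 2: keep the old --rc lcov_* option names"
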
00:18:08.180 18:07:33 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:08.180 18:07:33 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.180 18:07:33 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:08.180 18:07:33 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:08.180 18:07:33 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.180 18:07:33 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.180 18:07:33 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:08.180 18:07:33 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:08.180 18:07:33 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:08.180 18:07:33 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:08.180 18:07:33 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:08.180 18:07:33 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:08.180 18:07:33 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.180 18:07:33 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:08.180 18:07:33 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:08.180 18:07:33 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:08.180 18:07:33 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:08.180 18:07:33 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:08.180 18:07:33 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:08.180 18:07:33 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:08.180 18:07:33 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:08.180 18:07:33 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:08.180 18:07:33 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.180 18:07:33 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:08.180 18:07:33 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:08.180 18:07:33 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:08.180 18:07:33 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:18:08.180 18:07:33 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:08.180 18:07:33 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:08.180 18:07:33 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:08.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:08.180 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:08.180 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:08.180 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:08.180 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:08.180 18:07:34 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73687 00:18:08.180 18:07:34 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:08.180 18:07:34 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73687 00:18:08.180 18:07:34 ftl -- common/autotest_common.sh@833 -- # '[' -z 73687 ']' 00:18:08.180 18:07:34 ftl -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.180 18:07:34 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:08.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.180 18:07:34 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.180 18:07:34 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:08.180 18:07:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:08.180 [2024-11-05 18:07:34.380186] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:18:08.180 [2024-11-05 18:07:34.380301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73687 ] 00:18:08.180 [2024-11-05 18:07:34.557664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.180 [2024-11-05 18:07:34.654749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.180 18:07:35 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:08.180 18:07:35 ftl -- common/autotest_common.sh@866 -- # return 0 00:18:08.180 18:07:35 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:08.180 18:07:35 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:08.180 18:07:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:08.180 18:07:36 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:08.180 18:07:36 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:08.180 18:07:36 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:08.180 18:07:36 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@50 -- # break 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@63 -- # break 00:18:08.180 18:07:37 ftl -- ftl/ftl.sh@66 -- # killprocess 73687 00:18:08.180 18:07:37 ftl -- common/autotest_common.sh@952 -- # '[' -z 73687 ']' 00:18:08.180 18:07:37 ftl -- common/autotest_common.sh@956 -- # kill -0 73687 00:18:08.180 18:07:37 ftl -- common/autotest_common.sh@957 -- # uname 00:18:08.180 18:07:37 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:08.180 18:07:37 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73687 00:18:08.180 18:07:37 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:08.180 killing process with pid 73687 00:18:08.180 18:07:37 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:08.181 18:07:37 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73687' 00:18:08.181 18:07:37 ftl -- common/autotest_common.sh@971 -- # kill 73687 00:18:08.181 18:07:37 ftl -- common/autotest_common.sh@976 -- # wait 73687 00:18:10.718 18:07:39 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:10.718 18:07:39 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:10.718 18:07:39 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:10.718 18:07:39 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:10.718 18:07:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:10.718 ************************************ 00:18:10.718 START TEST ftl_fio_basic 00:18:10.718 ************************************ 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:10.718 * Looking for test storage... 00:18:10.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:10.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.718 --rc genhtml_branch_coverage=1 00:18:10.718 --rc genhtml_function_coverage=1 00:18:10.718 --rc genhtml_legend=1 00:18:10.718 --rc geninfo_all_blocks=1 00:18:10.718 --rc geninfo_unexecuted_blocks=1 00:18:10.718 00:18:10.718 ' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:10.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.718 --rc genhtml_branch_coverage=1 00:18:10.718 --rc genhtml_function_coverage=1 00:18:10.718 --rc genhtml_legend=1 00:18:10.718 --rc geninfo_all_blocks=1 00:18:10.718 --rc geninfo_unexecuted_blocks=1 00:18:10.718 00:18:10.718 ' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:10.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.718 --rc genhtml_branch_coverage=1 00:18:10.718 --rc genhtml_function_coverage=1 00:18:10.718 --rc genhtml_legend=1 00:18:10.718 --rc geninfo_all_blocks=1 00:18:10.718 --rc geninfo_unexecuted_blocks=1 00:18:10.718 00:18:10.718 ' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:10.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.718 --rc genhtml_branch_coverage=1 00:18:10.718 --rc genhtml_function_coverage=1 00:18:10.718 --rc genhtml_legend=1 00:18:10.718 --rc geninfo_all_blocks=1 00:18:10.718 --rc geninfo_unexecuted_blocks=1 00:18:10.718 00:18:10.718 ' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
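
Before this FTL suite started, killprocess tore down the previous target (pid 73687) using the defensive pattern traced above: confirm the pid is still alive with kill -0, resolve its command name with ps, refuse to signal a sudo wrapper directly, then SIGTERM and reap it. A simplified sketch of that helper (the real one lives in test/common/autotest_common.sh and treats the sudo case more carefully):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2> /dev/null || return 1              # still running?
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
          [ "$process_name" = sudo ] && return 1           # simplified; never SIGTERM sudo itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                          # reap and propagate the exit status
  }
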
00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:10.718 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=73836 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 73836 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 73836 ']' 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:10.719 18:07:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:10.719 [2024-11-05 18:07:39.870722] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
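
fio.sh has just relaunched spdk_tgt with -m 7 (mask 0b111, i.e. three cores, matching the "Total cores available: 3" notice below) and now blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. The loop body is hidden behind xtrace_disable in the trace above, so the following is only an assumed minimal equivalent, polling with the real rpc_get_methods RPC:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < max_retries; i++ )); do
          kill -0 "$pid" 2> /dev/null || return 1          # target died during startup
          if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
              return 0                                     # RPC server is listening
          fi
          sleep 0.5
      done
      return 1
  }
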
00:18:10.719 [2024-11-05 18:07:39.870846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73836 ] 00:18:10.978 [2024-11-05 18:07:40.050579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:10.978 [2024-11-05 18:07:40.166882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.978 [2024-11-05 18:07:40.170467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.978 [2024-11-05 18:07:40.170469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:11.916 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:12.176 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:12.435 { 00:18:12.435 "name": "nvme0n1", 00:18:12.435 "aliases": [ 00:18:12.435 "eedb752c-2af4-4817-9e48-97f15c673bc2" 00:18:12.435 ], 00:18:12.435 "product_name": "NVMe disk", 00:18:12.435 "block_size": 4096, 00:18:12.435 "num_blocks": 1310720, 00:18:12.435 "uuid": "eedb752c-2af4-4817-9e48-97f15c673bc2", 00:18:12.435 "numa_id": -1, 00:18:12.435 "assigned_rate_limits": { 00:18:12.435 "rw_ios_per_sec": 0, 00:18:12.435 "rw_mbytes_per_sec": 0, 00:18:12.435 "r_mbytes_per_sec": 0, 00:18:12.435 "w_mbytes_per_sec": 0 00:18:12.435 }, 00:18:12.435 "claimed": false, 00:18:12.435 "zoned": false, 00:18:12.435 "supported_io_types": { 00:18:12.435 "read": true, 00:18:12.435 "write": true, 00:18:12.435 "unmap": true, 00:18:12.435 "flush": true, 00:18:12.435 "reset": true, 00:18:12.435 "nvme_admin": true, 00:18:12.435 "nvme_io": true, 00:18:12.435 "nvme_io_md": false, 00:18:12.435 "write_zeroes": true, 00:18:12.435 "zcopy": false, 00:18:12.435 "get_zone_info": false, 00:18:12.435 "zone_management": false, 00:18:12.435 "zone_append": false, 00:18:12.435 "compare": true, 00:18:12.435 "compare_and_write": false, 00:18:12.435 "abort": true, 00:18:12.435 
"seek_hole": false, 00:18:12.435 "seek_data": false, 00:18:12.435 "copy": true, 00:18:12.435 "nvme_iov_md": false 00:18:12.435 }, 00:18:12.435 "driver_specific": { 00:18:12.435 "nvme": [ 00:18:12.435 { 00:18:12.435 "pci_address": "0000:00:11.0", 00:18:12.435 "trid": { 00:18:12.435 "trtype": "PCIe", 00:18:12.435 "traddr": "0000:00:11.0" 00:18:12.435 }, 00:18:12.435 "ctrlr_data": { 00:18:12.435 "cntlid": 0, 00:18:12.435 "vendor_id": "0x1b36", 00:18:12.435 "model_number": "QEMU NVMe Ctrl", 00:18:12.435 "serial_number": "12341", 00:18:12.435 "firmware_revision": "8.0.0", 00:18:12.435 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:12.435 "oacs": { 00:18:12.435 "security": 0, 00:18:12.435 "format": 1, 00:18:12.435 "firmware": 0, 00:18:12.435 "ns_manage": 1 00:18:12.435 }, 00:18:12.435 "multi_ctrlr": false, 00:18:12.435 "ana_reporting": false 00:18:12.435 }, 00:18:12.435 "vs": { 00:18:12.435 "nvme_version": "1.4" 00:18:12.435 }, 00:18:12.435 "ns_data": { 00:18:12.435 "id": 1, 00:18:12.435 "can_share": false 00:18:12.435 } 00:18:12.435 } 00:18:12.435 ], 00:18:12.435 "mp_policy": "active_passive" 00:18:12.435 } 00:18:12.435 } 00:18:12.435 ]' 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:12.435 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:12.694 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:12.694 18:07:41 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=d7e0e857-c01e-4799-9059-3dae6daba476 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d7e0e857-c01e-4799-9059-3dae6daba476 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=5fad56cf-4c28-4526-a296-017c418e57e9 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5fad56cf-4c28-4526-a296-017c418e57e9 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=5fad56cf-4c28-4526-a296-017c418e57e9 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 5fad56cf-4c28-4526-a296-017c418e57e9 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=5fad56cf-4c28-4526-a296-017c418e57e9 
00:18:12.954 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:12.954 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5fad56cf-4c28-4526-a296-017c418e57e9 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:13.213 { 00:18:13.213 "name": "5fad56cf-4c28-4526-a296-017c418e57e9", 00:18:13.213 "aliases": [ 00:18:13.213 "lvs/nvme0n1p0" 00:18:13.213 ], 00:18:13.213 "product_name": "Logical Volume", 00:18:13.213 "block_size": 4096, 00:18:13.213 "num_blocks": 26476544, 00:18:13.213 "uuid": "5fad56cf-4c28-4526-a296-017c418e57e9", 00:18:13.213 "assigned_rate_limits": { 00:18:13.213 "rw_ios_per_sec": 0, 00:18:13.213 "rw_mbytes_per_sec": 0, 00:18:13.213 "r_mbytes_per_sec": 0, 00:18:13.213 "w_mbytes_per_sec": 0 00:18:13.213 }, 00:18:13.213 "claimed": false, 00:18:13.213 "zoned": false, 00:18:13.213 "supported_io_types": { 00:18:13.213 "read": true, 00:18:13.213 "write": true, 00:18:13.213 "unmap": true, 00:18:13.213 "flush": false, 00:18:13.213 "reset": true, 00:18:13.213 "nvme_admin": false, 00:18:13.213 "nvme_io": false, 00:18:13.213 "nvme_io_md": false, 00:18:13.213 "write_zeroes": true, 00:18:13.213 "zcopy": false, 00:18:13.213 "get_zone_info": false, 00:18:13.213 "zone_management": false, 00:18:13.213 "zone_append": false, 00:18:13.213 "compare": false, 00:18:13.213 "compare_and_write": false, 00:18:13.213 "abort": false, 00:18:13.213 "seek_hole": true, 00:18:13.213 "seek_data": true, 00:18:13.213 "copy": false, 00:18:13.213 "nvme_iov_md": false 00:18:13.213 }, 00:18:13.213 "driver_specific": { 00:18:13.213 "lvol": { 00:18:13.213 "lvol_store_uuid": "d7e0e857-c01e-4799-9059-3dae6daba476", 00:18:13.213 "base_bdev": "nvme0n1", 00:18:13.213 "thin_provision": true, 00:18:13.213 "num_allocated_clusters": 0, 00:18:13.213 "snapshot": false, 00:18:13.213 "clone": false, 00:18:13.213 "esnap_clone": false 00:18:13.213 } 00:18:13.213 } 00:18:13.213 } 00:18:13.213 ]' 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:13.213 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:13.472 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:13.473 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:13.473 18:07:42 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 5fad56cf-4c28-4526-a296-017c418e57e9 00:18:13.473 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=5fad56cf-4c28-4526-a296-017c418e57e9 00:18:13.473 18:07:42 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:13.473 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:13.473 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:13.473 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5fad56cf-4c28-4526-a296-017c418e57e9 00:18:13.732 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:13.732 { 00:18:13.732 "name": "5fad56cf-4c28-4526-a296-017c418e57e9", 00:18:13.732 "aliases": [ 00:18:13.732 "lvs/nvme0n1p0" 00:18:13.732 ], 00:18:13.732 "product_name": "Logical Volume", 00:18:13.732 "block_size": 4096, 00:18:13.732 "num_blocks": 26476544, 00:18:13.732 "uuid": "5fad56cf-4c28-4526-a296-017c418e57e9", 00:18:13.732 "assigned_rate_limits": { 00:18:13.732 "rw_ios_per_sec": 0, 00:18:13.732 "rw_mbytes_per_sec": 0, 00:18:13.732 "r_mbytes_per_sec": 0, 00:18:13.732 "w_mbytes_per_sec": 0 00:18:13.732 }, 00:18:13.732 "claimed": false, 00:18:13.732 "zoned": false, 00:18:13.732 "supported_io_types": { 00:18:13.732 "read": true, 00:18:13.732 "write": true, 00:18:13.732 "unmap": true, 00:18:13.732 "flush": false, 00:18:13.732 "reset": true, 00:18:13.732 "nvme_admin": false, 00:18:13.732 "nvme_io": false, 00:18:13.732 "nvme_io_md": false, 00:18:13.732 "write_zeroes": true, 00:18:13.732 "zcopy": false, 00:18:13.732 "get_zone_info": false, 00:18:13.732 "zone_management": false, 00:18:13.732 "zone_append": false, 00:18:13.732 "compare": false, 00:18:13.732 "compare_and_write": false, 00:18:13.732 "abort": false, 00:18:13.732 "seek_hole": true, 00:18:13.732 "seek_data": true, 00:18:13.732 "copy": false, 00:18:13.732 "nvme_iov_md": false 00:18:13.732 }, 00:18:13.732 "driver_specific": { 00:18:13.732 "lvol": { 00:18:13.732 "lvol_store_uuid": "d7e0e857-c01e-4799-9059-3dae6daba476", 00:18:13.732 "base_bdev": "nvme0n1", 00:18:13.732 "thin_provision": true, 00:18:13.732 "num_allocated_clusters": 0, 00:18:13.732 "snapshot": false, 00:18:13.732 "clone": false, 00:18:13.732 "esnap_clone": false 00:18:13.732 } 00:18:13.732 } 00:18:13.732 } 00:18:13.732 ]' 00:18:13.732 18:07:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:13.732 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:13.732 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:13.732 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:13.732 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:13.732 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:13.991 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 5fad56cf-4c28-4526-a296-017c418e57e9 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=5fad56cf-4c28-4526-a296-017c418e57e9 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:18:13.991 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5fad56cf-4c28-4526-a296-017c418e57e9 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:14.251 { 00:18:14.251 "name": "5fad56cf-4c28-4526-a296-017c418e57e9", 00:18:14.251 "aliases": [ 00:18:14.251 "lvs/nvme0n1p0" 00:18:14.251 ], 00:18:14.251 "product_name": "Logical Volume", 00:18:14.251 "block_size": 4096, 00:18:14.251 "num_blocks": 26476544, 00:18:14.251 "uuid": "5fad56cf-4c28-4526-a296-017c418e57e9", 00:18:14.251 "assigned_rate_limits": { 00:18:14.251 "rw_ios_per_sec": 0, 00:18:14.251 "rw_mbytes_per_sec": 0, 00:18:14.251 "r_mbytes_per_sec": 0, 00:18:14.251 "w_mbytes_per_sec": 0 00:18:14.251 }, 00:18:14.251 "claimed": false, 00:18:14.251 "zoned": false, 00:18:14.251 "supported_io_types": { 00:18:14.251 "read": true, 00:18:14.251 "write": true, 00:18:14.251 "unmap": true, 00:18:14.251 "flush": false, 00:18:14.251 "reset": true, 00:18:14.251 "nvme_admin": false, 00:18:14.251 "nvme_io": false, 00:18:14.251 "nvme_io_md": false, 00:18:14.251 "write_zeroes": true, 00:18:14.251 "zcopy": false, 00:18:14.251 "get_zone_info": false, 00:18:14.251 "zone_management": false, 00:18:14.251 "zone_append": false, 00:18:14.251 "compare": false, 00:18:14.251 "compare_and_write": false, 00:18:14.251 "abort": false, 00:18:14.251 "seek_hole": true, 00:18:14.251 "seek_data": true, 00:18:14.251 "copy": false, 00:18:14.251 "nvme_iov_md": false 00:18:14.251 }, 00:18:14.251 "driver_specific": { 00:18:14.251 "lvol": { 00:18:14.251 "lvol_store_uuid": "d7e0e857-c01e-4799-9059-3dae6daba476", 00:18:14.251 "base_bdev": "nvme0n1", 00:18:14.251 "thin_provision": true, 00:18:14.251 "num_allocated_clusters": 0, 00:18:14.251 "snapshot": false, 00:18:14.251 "clone": false, 00:18:14.251 "esnap_clone": false 00:18:14.251 } 00:18:14.251 } 00:18:14.251 } 00:18:14.251 ]' 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:14.251 18:07:43 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5fad56cf-4c28-4526-a296-017c418e57e9 -c nvc0n1p0 --l2p_dram_limit 60 00:18:14.511 [2024-11-05 18:07:43.730158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.730209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:14.511 [2024-11-05 18:07:43.730228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:14.511 
[2024-11-05 18:07:43.730239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.730326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.730343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.511 [2024-11-05 18:07:43.730357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:14.511 [2024-11-05 18:07:43.730367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.730426] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:14.511 [2024-11-05 18:07:43.731472] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:14.511 [2024-11-05 18:07:43.731504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.731515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.511 [2024-11-05 18:07:43.731529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.108 ms 00:18:14.511 [2024-11-05 18:07:43.731540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.731624] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3ed027e4-3bd9-43ea-a8ea-bb82561e18ac 00:18:14.511 [2024-11-05 18:07:43.733041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.733202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:14.511 [2024-11-05 18:07:43.733222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:14.511 [2024-11-05 18:07:43.733236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.740789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.740824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.511 [2024-11-05 18:07:43.740837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.486 ms 00:18:14.511 [2024-11-05 18:07:43.740850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.740973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.740991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.511 [2024-11-05 18:07:43.741002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:14.511 [2024-11-05 18:07:43.741019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.741096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.741112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:14.511 [2024-11-05 18:07:43.741123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:14.511 [2024-11-05 18:07:43.741136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.741174] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:14.511 [2024-11-05 18:07:43.746072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 
18:07:43.746108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.511 [2024-11-05 18:07:43.746125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:18:14.511 [2024-11-05 18:07:43.746138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.746184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.511 [2024-11-05 18:07:43.746195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:14.511 [2024-11-05 18:07:43.746209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:14.511 [2024-11-05 18:07:43.746220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.511 [2024-11-05 18:07:43.746266] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:14.511 [2024-11-05 18:07:43.746433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:14.511 [2024-11-05 18:07:43.746458] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:14.511 [2024-11-05 18:07:43.746473] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:14.511 [2024-11-05 18:07:43.746489] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:14.511 [2024-11-05 18:07:43.746502] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:14.511 [2024-11-05 18:07:43.746516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:14.511 [2024-11-05 18:07:43.746527] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:14.512 [2024-11-05 18:07:43.746540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:14.512 [2024-11-05 18:07:43.746551] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:14.512 [2024-11-05 18:07:43.746564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.512 [2024-11-05 18:07:43.746578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:14.512 [2024-11-05 18:07:43.746593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:18:14.512 [2024-11-05 18:07:43.746605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.512 [2024-11-05 18:07:43.746698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.512 [2024-11-05 18:07:43.746709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:14.512 [2024-11-05 18:07:43.746723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:14.512 [2024-11-05 18:07:43.746733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.512 [2024-11-05 18:07:43.746848] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:14.512 [2024-11-05 18:07:43.746860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:14.512 [2024-11-05 18:07:43.746877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.512 [2024-11-05 18:07:43.746888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.512 [2024-11-05 18:07:43.746901] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:18:14.512 [2024-11-05 18:07:43.746911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:14.512 [2024-11-05 18:07:43.746924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:14.512 [2024-11-05 18:07:43.746934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:14.512 [2024-11-05 18:07:43.746946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:14.512 [2024-11-05 18:07:43.746956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.512 [2024-11-05 18:07:43.746969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:14.512 [2024-11-05 18:07:43.746979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:14.512 [2024-11-05 18:07:43.746993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:14.512 [2024-11-05 18:07:43.747003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:14.512 [2024-11-05 18:07:43.747016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:14.512 [2024-11-05 18:07:43.747025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:14.512 [2024-11-05 18:07:43.747053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:14.512 [2024-11-05 18:07:43.747065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:14.512 [2024-11-05 18:07:43.747088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.512 [2024-11-05 18:07:43.747110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:14.512 [2024-11-05 18:07:43.747120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.512 [2024-11-05 18:07:43.747142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:14.512 [2024-11-05 18:07:43.747154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.512 [2024-11-05 18:07:43.747175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:14.512 [2024-11-05 18:07:43.747185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:14.512 [2024-11-05 18:07:43.747207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:14.512 [2024-11-05 18:07:43.747222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.512 [2024-11-05 18:07:43.747244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:14.512 [2024-11-05 18:07:43.747269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:14.512 [2024-11-05 18:07:43.747282] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:14.512 [2024-11-05 18:07:43.747292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:14.512 [2024-11-05 18:07:43.747304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:14.512 [2024-11-05 18:07:43.747314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:14.512 [2024-11-05 18:07:43.747336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:14.512 [2024-11-05 18:07:43.747350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747360] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:14.512 [2024-11-05 18:07:43.747374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:14.512 [2024-11-05 18:07:43.747384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:14.512 [2024-11-05 18:07:43.747397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:14.512 [2024-11-05 18:07:43.747419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:14.512 [2024-11-05 18:07:43.747435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:14.512 [2024-11-05 18:07:43.747450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:14.512 [2024-11-05 18:07:43.747463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:14.512 [2024-11-05 18:07:43.747473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:14.512 [2024-11-05 18:07:43.747485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:14.512 [2024-11-05 18:07:43.747500] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:14.512 [2024-11-05 18:07:43.747516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.512 [2024-11-05 18:07:43.747529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:14.512 [2024-11-05 18:07:43.747543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:14.512 [2024-11-05 18:07:43.747553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:14.512 [2024-11-05 18:07:43.747567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:14.512 [2024-11-05 18:07:43.747578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:14.512 [2024-11-05 18:07:43.747591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:14.512 [2024-11-05 18:07:43.747602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:14.512 [2024-11-05 18:07:43.747615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:18:14.512 [2024-11-05 18:07:43.747626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:14.512 [2024-11-05 18:07:43.747642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:14.512 [2024-11-05 18:07:43.747653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:14.512 [2024-11-05 18:07:43.747668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:14.512 [2024-11-05 18:07:43.747679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:14.512 [2024-11-05 18:07:43.747692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:14.512 [2024-11-05 18:07:43.747703] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:14.512 [2024-11-05 18:07:43.747717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:14.512 [2024-11-05 18:07:43.747731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:14.512 [2024-11-05 18:07:43.747744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:14.512 [2024-11-05 18:07:43.747756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:14.512 [2024-11-05 18:07:43.747769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:14.512 [2024-11-05 18:07:43.747782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.512 [2024-11-05 18:07:43.747797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:14.512 [2024-11-05 18:07:43.747807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:18:14.512 [2024-11-05 18:07:43.747822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.512 [2024-11-05 18:07:43.747893] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
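The layout dump above makes the sizing arithmetic easy to verify: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB l2p region, and the same entry count at the 4096-byte FTL block size gives the 80 GiB of user-addressable space the new bdev will expose. A sketch of the arithmetic, with the values copied from the dump (variable names are shorthand):

    entries=20971520    # "L2P entries" from ftl_layout_setup
    addr=4              # "L2P address size: 4" (bytes per entry)
    blk=4096            # FTL block size
    echo $(( entries * addr / 1024 / 1024 ))        # 80 -> "Region l2p ... 80.00 MiB"
    echo $(( entries * blk / 1024 / 1024 / 1024 ))  # 80 -> GiB exposed as ftl0

The `--l2p_dram_limit 60` passed to bdev_ftl_create caps how much of that 80 MiB table may stay resident, which is why the startup below settles on an l2p maximum resident size of 59 (of 60) MiB.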
00:18:14.512 [2024-11-05 18:07:43.747912] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:18.708 [2024-11-05 18:07:47.731050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.731294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:18.708 [2024-11-05 18:07:47.731324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3989.621 ms 00:18:18.708 [2024-11-05 18:07:47.731338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.768847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.768898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.708 [2024-11-05 18:07:47.768913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.230 ms 00:18:18.708 [2024-11-05 18:07:47.768926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.769093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.769110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:18.708 [2024-11-05 18:07:47.769122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:18.708 [2024-11-05 18:07:47.769138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.825846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.825898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.708 [2024-11-05 18:07:47.825920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.746 ms 00:18:18.708 [2024-11-05 18:07:47.825939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.825990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.826008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.708 [2024-11-05 18:07:47.826022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:18.708 [2024-11-05 18:07:47.826037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.826576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.826599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.708 [2024-11-05 18:07:47.826613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:18:18.708 [2024-11-05 18:07:47.826633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.826787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.826808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.708 [2024-11-05 18:07:47.826822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:18:18.708 [2024-11-05 18:07:47.826842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.848257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.848302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.708 [2024-11-05 
18:07:47.848316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.411 ms 00:18:18.708 [2024-11-05 18:07:47.848345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.860820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:18.708 [2024-11-05 18:07:47.877526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.877591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:18.708 [2024-11-05 18:07:47.877610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.072 ms 00:18:18.708 [2024-11-05 18:07:47.877645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.974650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.974703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:18.708 [2024-11-05 18:07:47.974742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.105 ms 00:18:18.708 [2024-11-05 18:07:47.974754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:47.974992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:47.975009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:18.708 [2024-11-05 18:07:47.975025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:18:18.708 [2024-11-05 18:07:47.975035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.708 [2024-11-05 18:07:48.012125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.708 [2024-11-05 18:07:48.012295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:18.708 [2024-11-05 18:07:48.012321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.080 ms 00:18:18.708 [2024-11-05 18:07:48.012332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.048093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.048124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:18.968 [2024-11-05 18:07:48.048142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.766 ms 00:18:18.968 [2024-11-05 18:07:48.048151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.048883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.048911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.968 [2024-11-05 18:07:48.048926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:18:18.968 [2024-11-05 18:07:48.048936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.156961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.157011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:18.968 [2024-11-05 18:07:48.157033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.113 ms 00:18:18.968 [2024-11-05 18:07:48.157047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 
18:07:48.194190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.194233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:18.968 [2024-11-05 18:07:48.194251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.086 ms 00:18:18.968 [2024-11-05 18:07:48.194262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.230392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.230436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:18.968 [2024-11-05 18:07:48.230452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.136 ms 00:18:18.968 [2024-11-05 18:07:48.230462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.266760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.266925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:18.968 [2024-11-05 18:07:48.266953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.301 ms 00:18:18.968 [2024-11-05 18:07:48.266963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.267014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.267026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:18.968 [2024-11-05 18:07:48.267042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:18.968 [2024-11-05 18:07:48.267055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.267245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.968 [2024-11-05 18:07:48.267260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:18.968 [2024-11-05 18:07:48.267274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:18.968 [2024-11-05 18:07:48.267284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.968 [2024-11-05 18:07:48.268442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4545.229 ms, result 0 00:18:18.968 { 00:18:18.968 "name": "ftl0", 00:18:18.968 "uuid": "3ed027e4-3bd9-43ea-a8ea-bb82561e18ac" 00:18:18.968 } 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:19.227 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:19.487 [ 00:18:19.487 { 00:18:19.487 "name": "ftl0", 00:18:19.487 "aliases": [ 00:18:19.487 "3ed027e4-3bd9-43ea-a8ea-bb82561e18ac" 00:18:19.487 ], 00:18:19.487 "product_name": "FTL 
disk", 00:18:19.487 "block_size": 4096, 00:18:19.487 "num_blocks": 20971520, 00:18:19.487 "uuid": "3ed027e4-3bd9-43ea-a8ea-bb82561e18ac", 00:18:19.487 "assigned_rate_limits": { 00:18:19.487 "rw_ios_per_sec": 0, 00:18:19.487 "rw_mbytes_per_sec": 0, 00:18:19.487 "r_mbytes_per_sec": 0, 00:18:19.487 "w_mbytes_per_sec": 0 00:18:19.487 }, 00:18:19.487 "claimed": false, 00:18:19.487 "zoned": false, 00:18:19.487 "supported_io_types": { 00:18:19.487 "read": true, 00:18:19.487 "write": true, 00:18:19.487 "unmap": true, 00:18:19.487 "flush": true, 00:18:19.487 "reset": false, 00:18:19.487 "nvme_admin": false, 00:18:19.487 "nvme_io": false, 00:18:19.487 "nvme_io_md": false, 00:18:19.487 "write_zeroes": true, 00:18:19.487 "zcopy": false, 00:18:19.487 "get_zone_info": false, 00:18:19.487 "zone_management": false, 00:18:19.487 "zone_append": false, 00:18:19.487 "compare": false, 00:18:19.487 "compare_and_write": false, 00:18:19.487 "abort": false, 00:18:19.487 "seek_hole": false, 00:18:19.487 "seek_data": false, 00:18:19.487 "copy": false, 00:18:19.487 "nvme_iov_md": false 00:18:19.487 }, 00:18:19.487 "driver_specific": { 00:18:19.487 "ftl": { 00:18:19.487 "base_bdev": "5fad56cf-4c28-4526-a296-017c418e57e9", 00:18:19.487 "cache": "nvc0n1p0" 00:18:19.487 } 00:18:19.487 } 00:18:19.487 } 00:18:19.487 ] 00:18:19.487 18:07:48 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:18:19.487 18:07:48 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:19.487 18:07:48 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:19.746 18:07:48 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:19.746 18:07:48 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:20.007 [2024-11-05 18:07:49.087648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.087709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:20.007 [2024-11-05 18:07:49.087726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:20.007 [2024-11-05 18:07:49.087739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.087781] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.007 [2024-11-05 18:07:49.092048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.092220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:20.007 [2024-11-05 18:07:49.092251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:18:20.007 [2024-11-05 18:07:49.092263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.092752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.092772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:20.007 [2024-11-05 18:07:49.092787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:18:20.007 [2024-11-05 18:07:49.092798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.095424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.095452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:20.007 
[2024-11-05 18:07:49.095467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:18:20.007 [2024-11-05 18:07:49.095477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.100502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.100534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:20.007 [2024-11-05 18:07:49.100549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:18:20.007 [2024-11-05 18:07:49.100559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.137472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.137667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:20.007 [2024-11-05 18:07:49.137695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.878 ms 00:18:20.007 [2024-11-05 18:07:49.137705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.160039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.160203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:20.007 [2024-11-05 18:07:49.160231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.298 ms 00:18:20.007 [2024-11-05 18:07:49.160245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.160479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.160495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:20.007 [2024-11-05 18:07:49.160509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:18:20.007 [2024-11-05 18:07:49.160520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.197554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.197594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:20.007 [2024-11-05 18:07:49.197611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.063 ms 00:18:20.007 [2024-11-05 18:07:49.197628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.234571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.234611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:20.007 [2024-11-05 18:07:49.234627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.950 ms 00:18:20.007 [2024-11-05 18:07:49.234637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.270607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.270646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:20.007 [2024-11-05 18:07:49.270662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.971 ms 00:18:20.007 [2024-11-05 18:07:49.270672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.307101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.007 [2024-11-05 18:07:49.307141] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:20.007 [2024-11-05 18:07:49.307158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.362 ms 00:18:20.007 [2024-11-05 18:07:49.307167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.007 [2024-11-05 18:07:49.307221] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:20.007 [2024-11-05 18:07:49.307238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:20.007 [2024-11-05 18:07:49.307380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 
[2024-11-05 18:07:49.307531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:20.008 [2024-11-05 18:07:49.307854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.307999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:20.008 [2024-11-05 18:07:49.308532] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:20.008 [2024-11-05 18:07:49.308545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3ed027e4-3bd9-43ea-a8ea-bb82561e18ac 00:18:20.009 [2024-11-05 18:07:49.308556] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:20.009 [2024-11-05 18:07:49.308570] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:20.009 [2024-11-05 18:07:49.308580] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:20.009 [2024-11-05 18:07:49.308596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:20.009 [2024-11-05 18:07:49.308606] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:20.009 [2024-11-05 18:07:49.308619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:20.009 [2024-11-05 18:07:49.308629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:20.009 [2024-11-05 18:07:49.308640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:20.009 [2024-11-05 18:07:49.308649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:20.009 [2024-11-05 18:07:49.308662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.009 [2024-11-05 18:07:49.308672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:20.009 [2024-11-05 18:07:49.308686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.445 ms 00:18:20.009 [2024-11-05 18:07:49.308696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.009 [2024-11-05 18:07:49.329234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.009 [2024-11-05 18:07:49.329272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:20.009 [2024-11-05 18:07:49.329287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.499 ms 00:18:20.009 [2024-11-05 18:07:49.329297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.009 [2024-11-05 18:07:49.329881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.009 [2024-11-05 18:07:49.329903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:20.009 [2024-11-05 18:07:49.329917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:18:20.009 [2024-11-05 18:07:49.329928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.268 [2024-11-05 18:07:49.398939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.268 [2024-11-05 18:07:49.398985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.268 [2024-11-05 18:07:49.399001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.268 [2024-11-05 18:07:49.399012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
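The statistics block above also decodes the `WAF: inf` line: write amplification here is total device writes divided by user writes, and this create-and-unload cycle issued 960 metadata writes with zero user I/O, so the ratio is undefined. A sketch of the computation using the dumped values:

    total=960   # "total writes" from the stats dump
    user=0      # "user writes"
    if (( user == 0 )); then
      echo "WAF: inf"
    else
      awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi

Consistently, every band in the validity dump is `free` with `wr_cnt: 0` and the device reports 0 valid LBAs: nothing but FTL metadata was ever written before this shutdown.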
00:18:20.268 [2024-11-05 18:07:49.399079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.268 [2024-11-05 18:07:49.399090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.268 [2024-11-05 18:07:49.399104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.268 [2024-11-05 18:07:49.399113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.268 [2024-11-05 18:07:49.399233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.268 [2024-11-05 18:07:49.399247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.268 [2024-11-05 18:07:49.399264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.268 [2024-11-05 18:07:49.399274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.268 [2024-11-05 18:07:49.399308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.268 [2024-11-05 18:07:49.399319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.268 [2024-11-05 18:07:49.399331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.268 [2024-11-05 18:07:49.399341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.268 [2024-11-05 18:07:49.530105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.268 [2024-11-05 18:07:49.530166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.268 [2024-11-05 18:07:49.530184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.268 [2024-11-05 18:07:49.530211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.629465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.527 [2024-11-05 18:07:49.629518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.527 [2024-11-05 18:07:49.629535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.527 [2024-11-05 18:07:49.629562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.629693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.527 [2024-11-05 18:07:49.629706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.527 [2024-11-05 18:07:49.629720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.527 [2024-11-05 18:07:49.629733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.629808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.527 [2024-11-05 18:07:49.629820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.527 [2024-11-05 18:07:49.629833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.527 [2024-11-05 18:07:49.629843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.629973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.527 [2024-11-05 18:07:49.629987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.527 [2024-11-05 18:07:49.630001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.527 [2024-11-05 
18:07:49.630011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.630072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.527 [2024-11-05 18:07:49.630085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:20.527 [2024-11-05 18:07:49.630097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.527 [2024-11-05 18:07:49.630107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.630157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.527 [2024-11-05 18:07:49.630168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:20.527 [2024-11-05 18:07:49.630180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.527 [2024-11-05 18:07:49.630190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.630257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:20.527 [2024-11-05 18:07:49.630269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:20.527 [2024-11-05 18:07:49.630282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:20.527 [2024-11-05 18:07:49.630292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.527 [2024-11-05 18:07:49.630472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.666 ms, result 0 00:18:20.527 true 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 73836 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 73836 ']' 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 73836 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73836 00:18:20.527 killing process with pid 73836 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73836' 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 73836 00:18:20.527 18:07:49 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 73836 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:24.720 18:07:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:24.980 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:24.980 fio-3.35 00:18:24.980 Starting 1 thread 00:18:30.300 00:18:30.300 test: (groupid=0, jobs=1): err= 0: pid=74053: Tue Nov 5 18:07:59 2024 00:18:30.300 read: IOPS=920, BW=61.2MiB/s (64.1MB/s)(255MiB/4162msec) 00:18:30.300 slat (nsec): min=4425, max=24998, avg=5899.88, stdev=2061.13 00:18:30.300 clat (usec): min=340, max=968, avg=492.95, stdev=42.10 00:18:30.300 lat (usec): min=345, max=974, avg=498.85, stdev=42.31 00:18:30.300 clat percentiles (usec): 00:18:30.300 | 1.00th=[ 383], 5.00th=[ 441], 10.00th=[ 449], 20.00th=[ 453], 00:18:30.300 | 30.00th=[ 457], 40.00th=[ 482], 50.00th=[ 515], 60.00th=[ 515], 00:18:30.300 | 70.00th=[ 519], 80.00th=[ 523], 90.00th=[ 529], 95.00th=[ 537], 00:18:30.300 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 652], 99.95th=[ 660], 00:18:30.300 | 99.99th=[ 971] 00:18:30.300 write: IOPS=927, BW=61.6MiB/s (64.6MB/s)(256MiB/4157msec); 0 zone resets 00:18:30.300 slat (nsec): min=15694, max=62920, avg=19607.26, stdev=4498.43 00:18:30.300 clat (usec): min=395, max=1060, avg=552.87, stdev=63.55 00:18:30.300 lat (usec): min=413, max=1081, avg=572.48, stdev=63.91 00:18:30.300 clat percentiles (usec): 00:18:30.300 | 1.00th=[ 449], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 529], 00:18:30.300 | 30.00th=[ 537], 40.00th=[ 537], 50.00th=[ 537], 60.00th=[ 545], 00:18:30.300 | 70.00th=[ 586], 80.00th=[ 603], 90.00th=[ 611], 95.00th=[ 619], 00:18:30.300 | 99.00th=[ 857], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1029], 00:18:30.300 | 99.99th=[ 1057] 00:18:30.300 bw ( KiB/s): min=59160, max=64328, per=99.96%, avg=63053.00, stdev=1684.48, samples=8 00:18:30.300 iops : min= 870, max= 946, avg=927.25, stdev=24.77, samples=8 00:18:30.300 lat (usec) : 500=28.85%, 750=70.32%, 1000=0.77% 00:18:30.300 lat 
(msec) : 2=0.07% 00:18:30.300 cpu : usr=99.30%, sys=0.12%, ctx=10, majf=0, minf=1169 00:18:30.300 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:30.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:30.300 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:30.300 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:30.300 00:18:30.300 Run status group 0 (all jobs): 00:18:30.300 READ: bw=61.2MiB/s (64.1MB/s), 61.2MiB/s-61.2MiB/s (64.1MB/s-64.1MB/s), io=255MiB (267MB), run=4162-4162msec 00:18:30.300 WRITE: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=256MiB (269MB), run=4157-4157msec 00:18:32.206 ----------------------------------------------------- 00:18:32.206 Suppressions used: 00:18:32.206 count bytes template 00:18:32.206 1 5 /usr/src/fio/parse.c 00:18:32.206 1 8 libtcmalloc_minimal.so 00:18:32.206 1 904 libcrypto.so 00:18:32.206 ----------------------------------------------------- 00:18:32.206 00:18:32.206 18:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:32.206 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:32.206 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:32.206 18:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:32.206 18:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:32.206 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:32.207 18:08:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:32.466 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:32.466 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:32.466 fio-3.35 00:18:32.466 Starting 2 threads 00:18:59.026 00:18:59.026 first_half: (groupid=0, jobs=1): err= 0: pid=74156: Tue Nov 5 18:08:28 2024 00:18:59.026 read: IOPS=2625, BW=10.3MiB/s (10.8MB/s)(255MiB/24873msec) 00:18:59.026 slat (nsec): min=3365, max=72567, avg=7056.92, stdev=3517.79 00:18:59.026 clat (usec): min=1070, max=301152, avg=38780.08, stdev=20980.37 00:18:59.026 lat (usec): min=1087, max=301159, avg=38787.14, stdev=20980.93 00:18:59.026 clat percentiles (msec): 00:18:59.026 | 1.00th=[ 14], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:18:59.026 | 30.00th=[ 32], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:18:59.026 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 43], 95.00th=[ 58], 00:18:59.026 | 99.00th=[ 155], 99.50th=[ 178], 99.90th=[ 213], 99.95th=[ 245], 00:18:59.026 | 99.99th=[ 292] 00:18:59.026 write: IOPS=3162, BW=12.4MiB/s (13.0MB/s)(256MiB/20720msec); 0 zone resets 00:18:59.026 slat (usec): min=4, max=726, avg= 8.73, stdev= 9.25 00:18:59.026 clat (usec): min=472, max=98718, avg=9899.36, stdev=17573.25 00:18:59.026 lat (usec): min=482, max=98740, avg=9908.09, stdev=17573.61 00:18:59.026 clat percentiles (usec): 00:18:59.026 | 1.00th=[ 1139], 5.00th=[ 1582], 10.00th=[ 1827], 20.00th=[ 2245], 00:18:59.026 | 30.00th=[ 3458], 40.00th=[ 4817], 50.00th=[ 5735], 60.00th=[ 6521], 00:18:59.026 | 70.00th=[ 7373], 80.00th=[10421], 90.00th=[12780], 95.00th=[32637], 00:18:59.027 | 99.00th=[89654], 99.50th=[91751], 99.90th=[95945], 99.95th=[96994], 00:18:59.027 | 99.99th=[96994] 00:18:59.027 bw ( KiB/s): min= 2520, max=43464, per=99.34%, avg=22795.13, stdev=10098.55, samples=23 00:18:59.027 iops : min= 630, max=10866, avg=5698.78, stdev=2524.64, samples=23 00:18:59.027 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.19% 00:18:59.027 lat (msec) : 2=7.11%, 4=10.04%, 10=22.60%, 20=7.98%, 50=46.48% 00:18:59.027 lat (msec) : 100=4.12%, 250=1.42%, 500=0.02% 00:18:59.027 cpu : usr=99.18%, sys=0.23%, ctx=41, majf=0, minf=5587 00:18:59.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:59.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.027 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:59.027 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:59.027 second_half: (groupid=0, jobs=1): err= 0: pid=74157: Tue Nov 5 18:08:28 2024 00:18:59.027 read: IOPS=2612, BW=10.2MiB/s (10.7MB/s)(255MiB/25024msec) 00:18:59.027 slat (usec): min=3, max=297, avg= 8.01, stdev= 3.78 00:18:59.027 clat (usec): min=1064, max=305368, avg=37820.66, stdev=21389.01 00:18:59.027 lat (usec): min=1078, max=305372, avg=37828.67, stdev=21389.77 00:18:59.027 clat percentiles (msec): 00:18:59.027 | 1.00th=[ 8], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 32], 00:18:59.027 | 30.00th=[ 32], 40.00th=[ 34], 50.00th=[ 36], 60.00th=[ 36], 00:18:59.027 | 70.00th=[ 37], 80.00th=[ 38], 90.00th=[ 42], 95.00th=[ 55], 00:18:59.027 | 99.00th=[ 
155], 99.50th=[ 176], 99.90th=[ 209], 99.95th=[ 239], 00:18:59.027 | 99.99th=[ 300] 00:18:59.027 write: IOPS=2868, BW=11.2MiB/s (11.7MB/s)(256MiB/22849msec); 0 zone resets 00:18:59.027 slat (usec): min=4, max=726, avg= 9.35, stdev= 7.61 00:18:59.027 clat (usec): min=462, max=97837, avg=11117.41, stdev=18582.23 00:18:59.027 lat (usec): min=467, max=97894, avg=11126.75, stdev=18582.81 00:18:59.027 clat percentiles (usec): 00:18:59.027 | 1.00th=[ 1045], 5.00th=[ 1385], 10.00th=[ 1663], 20.00th=[ 2114], 00:18:59.027 | 30.00th=[ 3621], 40.00th=[ 5014], 50.00th=[ 5997], 60.00th=[ 6849], 00:18:59.027 | 70.00th=[ 8455], 80.00th=[11338], 90.00th=[14353], 95.00th=[61080], 00:18:59.027 | 99.00th=[90702], 99.50th=[91751], 99.90th=[94897], 99.95th=[95945], 00:18:59.027 | 99.99th=[98042] 00:18:59.027 bw ( KiB/s): min= 5072, max=49648, per=99.36%, avg=22798.65, stdev=11995.71, samples=23 00:18:59.027 iops : min= 1268, max=12412, avg=5699.65, stdev=2998.91, samples=23 00:18:59.027 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.30% 00:18:59.027 lat (msec) : 2=8.44%, 4=7.55%, 10=23.10%, 20=7.92%, 50=47.39% 00:18:59.027 lat (msec) : 100=3.77%, 250=1.44%, 500=0.02% 00:18:59.027 cpu : usr=99.23%, sys=0.16%, ctx=39, majf=0, minf=5524 00:18:59.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:59.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.027 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:59.027 issued rwts: total=65379,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:59.027 00:18:59.027 Run status group 0 (all jobs): 00:18:59.027 READ: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=510MiB (535MB), run=24873-25024msec 00:18:59.027 WRITE: bw=22.4MiB/s (23.5MB/s), 11.2MiB/s-12.4MiB/s (11.7MB/s-13.0MB/s), io=512MiB (537MB), run=20720-22849msec 00:19:00.967 ----------------------------------------------------- 00:19:00.967 Suppressions used: 00:19:00.967 count bytes template 00:19:00.967 2 10 /usr/src/fio/parse.c 00:19:00.967 4 384 /usr/src/fio/iolog.c 00:19:00.967 1 8 libtcmalloc_minimal.so 00:19:00.967 1 904 libcrypto.so 00:19:00.967 ----------------------------------------------------- 00:19:00.967 00:19:01.226 18:08:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:01.226 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:01.226 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:01.226 18:08:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:01.227 18:08:30 
ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:01.227 18:08:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:01.486 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:01.486 fio-3.35 00:19:01.486 Starting 1 thread 00:19:16.382 00:19:16.382 test: (groupid=0, jobs=1): err= 0: pid=74486: Tue Nov 5 18:08:45 2024 00:19:16.382 read: IOPS=7461, BW=29.1MiB/s (30.6MB/s)(255MiB/8739msec) 00:19:16.382 slat (nsec): min=3276, max=55372, avg=7313.02, stdev=4410.74 00:19:16.382 clat (usec): min=634, max=34923, avg=17144.51, stdev=1690.54 00:19:16.382 lat (usec): min=638, max=34927, avg=17151.82, stdev=1691.65 00:19:16.382 clat percentiles (usec): 00:19:16.382 | 1.00th=[15139], 5.00th=[15401], 10.00th=[15533], 20.00th=[15795], 00:19:16.382 | 30.00th=[16057], 40.00th=[16450], 50.00th=[17433], 60.00th=[17695], 00:19:16.382 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18220], 95.00th=[18482], 00:19:16.382 | 99.00th=[24511], 99.50th=[27657], 99.90th=[30278], 99.95th=[30802], 00:19:16.382 | 99.99th=[34341] 00:19:16.382 write: IOPS=14.2k, BW=55.6MiB/s (58.3MB/s)(256MiB/4602msec); 0 zone resets 00:19:16.382 slat (usec): min=4, max=1531, avg= 7.15, stdev=11.10 00:19:16.382 clat (usec): min=539, max=52831, avg=8945.34, stdev=10985.55 00:19:16.382 lat (usec): min=546, max=52837, avg=8952.49, stdev=10985.56 00:19:16.382 clat percentiles (usec): 00:19:16.382 | 1.00th=[ 906], 5.00th=[ 1074], 10.00th=[ 1205], 20.00th=[ 1385], 00:19:16.382 | 30.00th=[ 1565], 40.00th=[ 1909], 50.00th=[ 5735], 60.00th=[ 6587], 00:19:16.382 | 70.00th=[ 7767], 80.00th=[10028], 90.00th=[32637], 95.00th=[34341], 00:19:16.382 | 99.00th=[36439], 99.50th=[36963], 99.90th=[48497], 99.95th=[51119], 00:19:16.382 | 99.99th=[52167] 00:19:16.382 bw ( KiB/s): min= 9056, max=82040, per=92.04%, avg=52428.80, stdev=18671.16, samples=10 00:19:16.382 iops : min= 2264, max=20510, avg=13107.20, stdev=4667.79, samples=10 00:19:16.382 lat (usec) : 750=0.04%, 1000=1.41% 00:19:16.382 lat (msec) : 2=18.98%, 4=0.75%, 10=18.84%, 20=50.88%, 50=9.07% 00:19:16.382 lat (msec) : 100=0.04% 00:19:16.382 cpu : usr=98.80%, sys=0.36%, ctx=27, majf=0, minf=5565 
00:19:16.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:16.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:16.382 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:16.382 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:16.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:16.382 00:19:16.382 Run status group 0 (all jobs): 00:19:16.382 READ: bw=29.1MiB/s (30.6MB/s), 29.1MiB/s-29.1MiB/s (30.6MB/s-30.6MB/s), io=255MiB (267MB), run=8739-8739msec 00:19:16.382 WRITE: bw=55.6MiB/s (58.3MB/s), 55.6MiB/s-55.6MiB/s (58.3MB/s-58.3MB/s), io=256MiB (268MB), run=4602-4602msec 00:19:18.288 ----------------------------------------------------- 00:19:18.288 Suppressions used: 00:19:18.288 count bytes template 00:19:18.288 1 5 /usr/src/fio/parse.c 00:19:18.288 2 192 /usr/src/fio/iolog.c 00:19:18.288 1 8 libtcmalloc_minimal.so 00:19:18.288 1 904 libcrypto.so 00:19:18.288 ----------------------------------------------------- 00:19:18.288 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.288 Remove shared memory files 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57691 /dev/shm/spdk_tgt_trace.pid72725 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:18.288 ************************************ 00:19:18.288 END TEST ftl_fio_basic 00:19:18.288 ************************************ 00:19:18.288 00:19:18.288 real 1m7.938s 00:19:18.288 user 2m27.087s 00:19:18.288 sys 0m3.815s 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:18.288 18:08:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:18.288 18:08:47 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:18.288 18:08:47 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:18.288 18:08:47 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:18.288 18:08:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:18.288 ************************************ 00:19:18.288 START TEST ftl_bdevperf 00:19:18.288 ************************************ 00:19:18.288 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:18.548 * Looking for test storage... 
00:19:18.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.548 --rc genhtml_branch_coverage=1 00:19:18.548 --rc genhtml_function_coverage=1 00:19:18.548 --rc genhtml_legend=1 00:19:18.548 --rc geninfo_all_blocks=1 00:19:18.548 --rc geninfo_unexecuted_blocks=1 00:19:18.548 00:19:18.548 ' 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.548 --rc genhtml_branch_coverage=1 00:19:18.548 
--rc genhtml_function_coverage=1 00:19:18.548 --rc genhtml_legend=1 00:19:18.548 --rc geninfo_all_blocks=1 00:19:18.548 --rc geninfo_unexecuted_blocks=1 00:19:18.548 00:19:18.548 ' 00:19:18.548 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.548 --rc genhtml_branch_coverage=1 00:19:18.548 --rc genhtml_function_coverage=1 00:19:18.548 --rc genhtml_legend=1 00:19:18.548 --rc geninfo_all_blocks=1 00:19:18.548 --rc geninfo_unexecuted_blocks=1 00:19:18.548 00:19:18.548 ' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:18.549 --rc genhtml_branch_coverage=1 00:19:18.549 --rc genhtml_function_coverage=1 00:19:18.549 --rc genhtml_legend=1 00:19:18.549 --rc geninfo_all_blocks=1 00:19:18.549 --rc geninfo_unexecuted_blocks=1 00:19:18.549 00:19:18.549 ' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=74730 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 74730 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 74730 ']' 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:18.549 18:08:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:18.549 [2024-11-05 18:08:47.871278] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
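The bdevperf launch traced above uses the tool's deferred-run pattern: -z starts the application but holds the workload until an RPC call releases it, and -T ftl0 pins the run to the ftl0 bdev, which does not exist yet at launch time. A minimal sketch of that flow, assuming an SPDK build tree, the default /var/tmp/spdk.sock RPC socket, and illustrative workload flags (the real flags come from bdevperf.sh and are not visible in this excerpt):

# Start bdevperf paused; the workload parameters are held until perform_tests fires.
./build/examples/bdevperf -z -T ftl0 -q 128 -o 4096 -w randwrite -t 240 &
bdevperf_pid=$!

# The harness waits with its waitforlisten helper; polling rpc.py is an
# illustrative stand-in with the same effect.
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# ...assemble the base, cache, and FTL bdevs over RPC (traced below)...

# Release the queued workload and wait for the results.
./examples/bdev/bdevperf/bdevperf.py perform_tests

Deferring the run this way lets the harness build the whole FTL stack over RPC first, so ftl0 exists before the first I/O is issued.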
00:19:18.549 [2024-11-05 18:08:47.871627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74730 ] 00:19:18.808 [2024-11-05 18:08:48.052812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.067 [2024-11-05 18:08:48.158768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:19.636 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:19.895 18:08:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:19.895 18:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:19.895 { 00:19:19.895 "name": "nvme0n1", 00:19:19.895 "aliases": [ 00:19:19.895 "73d1cf7f-ec38-4cbb-ab3b-bb682148d2db" 00:19:19.895 ], 00:19:19.895 "product_name": "NVMe disk", 00:19:19.895 "block_size": 4096, 00:19:19.895 "num_blocks": 1310720, 00:19:19.895 "uuid": "73d1cf7f-ec38-4cbb-ab3b-bb682148d2db", 00:19:19.895 "numa_id": -1, 00:19:19.895 "assigned_rate_limits": { 00:19:19.895 "rw_ios_per_sec": 0, 00:19:19.895 "rw_mbytes_per_sec": 0, 00:19:19.895 "r_mbytes_per_sec": 0, 00:19:19.895 "w_mbytes_per_sec": 0 00:19:19.895 }, 00:19:19.895 "claimed": true, 00:19:19.895 "claim_type": "read_many_write_one", 00:19:19.895 "zoned": false, 00:19:19.895 "supported_io_types": { 00:19:19.895 "read": true, 00:19:19.895 "write": true, 00:19:19.895 "unmap": true, 00:19:19.896 "flush": true, 00:19:19.896 "reset": true, 00:19:19.896 "nvme_admin": true, 00:19:19.896 "nvme_io": true, 00:19:19.896 "nvme_io_md": false, 00:19:19.896 "write_zeroes": true, 00:19:19.896 "zcopy": false, 00:19:19.896 "get_zone_info": false, 00:19:19.896 "zone_management": false, 00:19:19.896 "zone_append": false, 00:19:19.896 "compare": true, 00:19:19.896 "compare_and_write": false, 00:19:19.896 "abort": true, 00:19:19.896 "seek_hole": false, 00:19:19.896 "seek_data": false, 00:19:19.896 "copy": true, 00:19:19.896 "nvme_iov_md": false 00:19:19.896 }, 00:19:19.896 "driver_specific": { 00:19:19.896 
"nvme": [ 00:19:19.896 { 00:19:19.896 "pci_address": "0000:00:11.0", 00:19:19.896 "trid": { 00:19:19.896 "trtype": "PCIe", 00:19:19.896 "traddr": "0000:00:11.0" 00:19:19.896 }, 00:19:19.896 "ctrlr_data": { 00:19:19.896 "cntlid": 0, 00:19:19.896 "vendor_id": "0x1b36", 00:19:19.896 "model_number": "QEMU NVMe Ctrl", 00:19:19.896 "serial_number": "12341", 00:19:19.896 "firmware_revision": "8.0.0", 00:19:19.896 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:19.896 "oacs": { 00:19:19.896 "security": 0, 00:19:19.896 "format": 1, 00:19:19.896 "firmware": 0, 00:19:19.896 "ns_manage": 1 00:19:19.896 }, 00:19:19.896 "multi_ctrlr": false, 00:19:19.896 "ana_reporting": false 00:19:19.896 }, 00:19:19.896 "vs": { 00:19:19.896 "nvme_version": "1.4" 00:19:19.896 }, 00:19:19.896 "ns_data": { 00:19:19.896 "id": 1, 00:19:19.896 "can_share": false 00:19:19.896 } 00:19:19.896 } 00:19:19.896 ], 00:19:19.896 "mp_policy": "active_passive" 00:19:19.896 } 00:19:19.896 } 00:19:19.896 ]' 00:19:19.896 18:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:19.896 18:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:19.896 18:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:20.155 18:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=d7e0e857-c01e-4799-9059-3dae6daba476 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:20.156 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d7e0e857-c01e-4799-9059-3dae6daba476 00:19:20.415 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:20.674 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=c647eca3-e57f-4223-a44c-6132af154941 00:19:20.674 18:08:49 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c647eca3-e57f-4223-a44c-6132af154941 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:20.932 18:08:50 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:20.932 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:21.192 { 00:19:21.192 "name": "01b392a6-eb8d-42d6-85ad-857a493ef5f1", 00:19:21.192 "aliases": [ 00:19:21.192 "lvs/nvme0n1p0" 00:19:21.192 ], 00:19:21.192 "product_name": "Logical Volume", 00:19:21.192 "block_size": 4096, 00:19:21.192 "num_blocks": 26476544, 00:19:21.192 "uuid": "01b392a6-eb8d-42d6-85ad-857a493ef5f1", 00:19:21.192 "assigned_rate_limits": { 00:19:21.192 "rw_ios_per_sec": 0, 00:19:21.192 "rw_mbytes_per_sec": 0, 00:19:21.192 "r_mbytes_per_sec": 0, 00:19:21.192 "w_mbytes_per_sec": 0 00:19:21.192 }, 00:19:21.192 "claimed": false, 00:19:21.192 "zoned": false, 00:19:21.192 "supported_io_types": { 00:19:21.192 "read": true, 00:19:21.192 "write": true, 00:19:21.192 "unmap": true, 00:19:21.192 "flush": false, 00:19:21.192 "reset": true, 00:19:21.192 "nvme_admin": false, 00:19:21.192 "nvme_io": false, 00:19:21.192 "nvme_io_md": false, 00:19:21.192 "write_zeroes": true, 00:19:21.192 "zcopy": false, 00:19:21.192 "get_zone_info": false, 00:19:21.192 "zone_management": false, 00:19:21.192 "zone_append": false, 00:19:21.192 "compare": false, 00:19:21.192 "compare_and_write": false, 00:19:21.192 "abort": false, 00:19:21.192 "seek_hole": true, 00:19:21.192 "seek_data": true, 00:19:21.192 "copy": false, 00:19:21.192 "nvme_iov_md": false 00:19:21.192 }, 00:19:21.192 "driver_specific": { 00:19:21.192 "lvol": { 00:19:21.192 "lvol_store_uuid": "c647eca3-e57f-4223-a44c-6132af154941", 00:19:21.192 "base_bdev": "nvme0n1", 00:19:21.192 "thin_provision": true, 00:19:21.192 "num_allocated_clusters": 0, 00:19:21.192 "snapshot": false, 00:19:21.192 "clone": false, 00:19:21.192 "esnap_clone": false 00:19:21.192 } 00:19:21.192 } 00:19:21.192 } 00:19:21.192 ]' 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:21.192 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:21.451 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:21.711 { 00:19:21.711 "name": "01b392a6-eb8d-42d6-85ad-857a493ef5f1", 00:19:21.711 "aliases": [ 00:19:21.711 "lvs/nvme0n1p0" 00:19:21.711 ], 00:19:21.711 "product_name": "Logical Volume", 00:19:21.711 "block_size": 4096, 00:19:21.711 "num_blocks": 26476544, 00:19:21.711 "uuid": "01b392a6-eb8d-42d6-85ad-857a493ef5f1", 00:19:21.711 "assigned_rate_limits": { 00:19:21.711 "rw_ios_per_sec": 0, 00:19:21.711 "rw_mbytes_per_sec": 0, 00:19:21.711 "r_mbytes_per_sec": 0, 00:19:21.711 "w_mbytes_per_sec": 0 00:19:21.711 }, 00:19:21.711 "claimed": false, 00:19:21.711 "zoned": false, 00:19:21.711 "supported_io_types": { 00:19:21.711 "read": true, 00:19:21.711 "write": true, 00:19:21.711 "unmap": true, 00:19:21.711 "flush": false, 00:19:21.711 "reset": true, 00:19:21.711 "nvme_admin": false, 00:19:21.711 "nvme_io": false, 00:19:21.711 "nvme_io_md": false, 00:19:21.711 "write_zeroes": true, 00:19:21.711 "zcopy": false, 00:19:21.711 "get_zone_info": false, 00:19:21.711 "zone_management": false, 00:19:21.711 "zone_append": false, 00:19:21.711 "compare": false, 00:19:21.711 "compare_and_write": false, 00:19:21.711 "abort": false, 00:19:21.711 "seek_hole": true, 00:19:21.711 "seek_data": true, 00:19:21.711 "copy": false, 00:19:21.711 "nvme_iov_md": false 00:19:21.711 }, 00:19:21.711 "driver_specific": { 00:19:21.711 "lvol": { 00:19:21.711 "lvol_store_uuid": "c647eca3-e57f-4223-a44c-6132af154941", 00:19:21.711 "base_bdev": "nvme0n1", 00:19:21.711 "thin_provision": true, 00:19:21.711 "num_allocated_clusters": 0, 00:19:21.711 "snapshot": false, 00:19:21.711 "clone": false, 00:19:21.711 "esnap_clone": false 00:19:21.711 } 00:19:21.711 } 00:19:21.711 } 00:19:21.711 ]' 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:21.711 18:08:50 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:21.970 18:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:21.970 18:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:21.970 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:21.970 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:21.970 18:08:51 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:19:21.970 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:19:21.970 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 01b392a6-eb8d-42d6-85ad-857a493ef5f1 00:19:22.229 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:22.229 { 00:19:22.229 "name": "01b392a6-eb8d-42d6-85ad-857a493ef5f1", 00:19:22.229 "aliases": [ 00:19:22.229 "lvs/nvme0n1p0" 00:19:22.229 ], 00:19:22.229 "product_name": "Logical Volume", 00:19:22.229 "block_size": 4096, 00:19:22.229 "num_blocks": 26476544, 00:19:22.229 "uuid": "01b392a6-eb8d-42d6-85ad-857a493ef5f1", 00:19:22.229 "assigned_rate_limits": { 00:19:22.229 "rw_ios_per_sec": 0, 00:19:22.229 "rw_mbytes_per_sec": 0, 00:19:22.229 "r_mbytes_per_sec": 0, 00:19:22.229 "w_mbytes_per_sec": 0 00:19:22.229 }, 00:19:22.229 "claimed": false, 00:19:22.229 "zoned": false, 00:19:22.229 "supported_io_types": { 00:19:22.229 "read": true, 00:19:22.229 "write": true, 00:19:22.229 "unmap": true, 00:19:22.229 "flush": false, 00:19:22.229 "reset": true, 00:19:22.229 "nvme_admin": false, 00:19:22.229 "nvme_io": false, 00:19:22.229 "nvme_io_md": false, 00:19:22.229 "write_zeroes": true, 00:19:22.229 "zcopy": false, 00:19:22.229 "get_zone_info": false, 00:19:22.229 "zone_management": false, 00:19:22.229 "zone_append": false, 00:19:22.229 "compare": false, 00:19:22.229 "compare_and_write": false, 00:19:22.229 "abort": false, 00:19:22.229 "seek_hole": true, 00:19:22.229 "seek_data": true, 00:19:22.229 "copy": false, 00:19:22.229 "nvme_iov_md": false 00:19:22.229 }, 00:19:22.229 "driver_specific": { 00:19:22.230 "lvol": { 00:19:22.230 "lvol_store_uuid": "c647eca3-e57f-4223-a44c-6132af154941", 00:19:22.230 "base_bdev": "nvme0n1", 00:19:22.230 "thin_provision": true, 00:19:22.230 "num_allocated_clusters": 0, 00:19:22.230 "snapshot": false, 00:19:22.230 "clone": false, 00:19:22.230 "esnap_clone": false 00:19:22.230 } 00:19:22.230 } 00:19:22.230 } 00:19:22.230 ]' 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:22.230 18:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 01b392a6-eb8d-42d6-85ad-857a493ef5f1 -c nvc0n1p0 --l2p_dram_limit 20 00:19:22.490 [2024-11-05 18:08:51.631212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.631260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:22.490 [2024-11-05 18:08:51.631275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:22.490 [2024-11-05 18:08:51.631287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.631340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.631357] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.490 [2024-11-05 18:08:51.631367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:22.490 [2024-11-05 18:08:51.631379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.631397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:22.490 [2024-11-05 18:08:51.632418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:22.490 [2024-11-05 18:08:51.632445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.632459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.490 [2024-11-05 18:08:51.632470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:19:22.490 [2024-11-05 18:08:51.632482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.632559] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b16fe3d6-2ddc-4ca8-87bd-15dd1162d31a 00:19:22.490 [2024-11-05 18:08:51.634001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.634184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:22.490 [2024-11-05 18:08:51.634211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:22.490 [2024-11-05 18:08:51.634228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.641849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.641874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.490 [2024-11-05 18:08:51.641887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.577 ms 00:19:22.490 [2024-11-05 18:08:51.641897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.641993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.642006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.490 [2024-11-05 18:08:51.642023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:22.490 [2024-11-05 18:08:51.642033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.642079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.642090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:22.490 [2024-11-05 18:08:51.642103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:22.490 [2024-11-05 18:08:51.642112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.642135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:22.490 [2024-11-05 18:08:51.646633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.646665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.490 [2024-11-05 18:08:51.646676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.513 ms 00:19:22.490 [2024-11-05 18:08:51.646706] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.646739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.490 [2024-11-05 18:08:51.646751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:22.490 [2024-11-05 18:08:51.646762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:22.490 [2024-11-05 18:08:51.646774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.490 [2024-11-05 18:08:51.646821] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:22.491 [2024-11-05 18:08:51.646951] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:22.491 [2024-11-05 18:08:51.646965] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:22.491 [2024-11-05 18:08:51.646982] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:22.491 [2024-11-05 18:08:51.646994] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647009] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647019] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:22.491 [2024-11-05 18:08:51.647032] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:22.491 [2024-11-05 18:08:51.647041] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:22.491 [2024-11-05 18:08:51.647053] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:22.491 [2024-11-05 18:08:51.647063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.491 [2024-11-05 18:08:51.647079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:22.491 [2024-11-05 18:08:51.647089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:19:22.491 [2024-11-05 18:08:51.647103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.491 [2024-11-05 18:08:51.647171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.491 [2024-11-05 18:08:51.647186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:22.491 [2024-11-05 18:08:51.647197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:22.491 [2024-11-05 18:08:51.647212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.491 [2024-11-05 18:08:51.647289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:22.491 [2024-11-05 18:08:51.647304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:22.491 [2024-11-05 18:08:51.647317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:22.491 [2024-11-05 18:08:51.647351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:22.491 
[2024-11-05 18:08:51.647373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:22.491 [2024-11-05 18:08:51.647382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.491 [2024-11-05 18:08:51.647402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:22.491 [2024-11-05 18:08:51.647414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:22.491 [2024-11-05 18:08:51.647443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:22.491 [2024-11-05 18:08:51.647469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:22.491 [2024-11-05 18:08:51.647478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:22.491 [2024-11-05 18:08:51.647493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:22.491 [2024-11-05 18:08:51.647514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:22.491 [2024-11-05 18:08:51.647546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:22.491 [2024-11-05 18:08:51.647578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:22.491 [2024-11-05 18:08:51.647607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:22.491 [2024-11-05 18:08:51.647639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:22.491 [2024-11-05 18:08:51.647670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.491 [2024-11-05 18:08:51.647690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:22.491 [2024-11-05 18:08:51.647702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:22.491 [2024-11-05 18:08:51.647726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:22.491 [2024-11-05 18:08:51.647737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:22.491 [2024-11-05 18:08:51.647746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:22.491 [2024-11-05 18:08:51.647757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:22.491 [2024-11-05 18:08:51.647778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:22.491 [2024-11-05 18:08:51.647787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647799] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:22.491 [2024-11-05 18:08:51.647809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:22.491 [2024-11-05 18:08:51.647823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:22.491 [2024-11-05 18:08:51.647851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:22.491 [2024-11-05 18:08:51.647860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:22.491 [2024-11-05 18:08:51.647873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:22.491 [2024-11-05 18:08:51.647882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:22.491 [2024-11-05 18:08:51.647893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:22.491 [2024-11-05 18:08:51.647903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:22.491 [2024-11-05 18:08:51.647918] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:22.491 [2024-11-05 18:08:51.647931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.491 [2024-11-05 18:08:51.647945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:22.491 [2024-11-05 18:08:51.647956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:22.491 [2024-11-05 18:08:51.647969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:22.491 [2024-11-05 18:08:51.647979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:22.491 [2024-11-05 18:08:51.647992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:22.491 [2024-11-05 18:08:51.648002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:22.491 [2024-11-05 18:08:51.648015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:22.491 [2024-11-05 18:08:51.648025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:22.491 [2024-11-05 18:08:51.648040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:22.491 [2024-11-05 18:08:51.648050] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:22.491 [2024-11-05 18:08:51.648063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:22.491 [2024-11-05 18:08:51.648073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:22.491 [2024-11-05 18:08:51.648085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:22.491 [2024-11-05 18:08:51.648096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:22.491 [2024-11-05 18:08:51.648108] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:22.491 [2024-11-05 18:08:51.648120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:22.491 [2024-11-05 18:08:51.648134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:22.491 [2024-11-05 18:08:51.648145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:22.491 [2024-11-05 18:08:51.648157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:22.491 [2024-11-05 18:08:51.648168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:22.491 [2024-11-05 18:08:51.648182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.491 [2024-11-05 18:08:51.648195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:22.491 [2024-11-05 18:08:51.648208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:19:22.491 [2024-11-05 18:08:51.648218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.491 [2024-11-05 18:08:51.648258] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
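The two dumps above describe the same on-disk metadata from two angles: dump_region prints each region's offset and size in MiB, while ftl_superblock_v5_md_layout_dump prints the raw superblock entries in block units (blk_offs/blk_sz). A minimal sketch of the conversion between the two views, assuming the 4 KiB FTL block size that this dump is consistent with (band_md: blk_offs 0x5020 maps to the 80.12 MiB offset, blk_sz 0x80 to the 0.50 MiB size):

    FTL_BLOCK_SIZE = 4096  # bytes; an assumption inferred from the dump, not printed in the log

    def region_mib(blk_offs: int, blk_sz: int) -> tuple:
        """Convert a superblock layout entry from block units to (offset, size) in MiB."""
        to_mib = lambda blocks: blocks * FTL_BLOCK_SIZE / (1 << 20)
        return to_mib(blk_offs), to_mib(blk_sz)

    # Region type:0x3 (band_md): blk_offs:0x5020 blk_sz:0x80
    print(region_mib(0x5020, 0x80))  # (80.125, 0.5) -- logged as "80.12 MiB" / "0.50 MiB"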
00:19:22.491 [2024-11-05 18:08:51.648276] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:26.686 [2024-11-05 18:08:55.404369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.686 [2024-11-05 18:08:55.404667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:26.686 [2024-11-05 18:08:55.404780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3762.203 ms 00:19:26.686 [2024-11-05 18:08:55.404818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.686 [2024-11-05 18:08:55.437556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.686 [2024-11-05 18:08:55.437804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.686 [2024-11-05 18:08:55.437949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.428 ms 00:19:26.686 [2024-11-05 18:08:55.437988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.686 [2024-11-05 18:08:55.438133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.686 [2024-11-05 18:08:55.438281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.686 [2024-11-05 18:08:55.438362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:26.686 [2024-11-05 18:08:55.438392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.686 [2024-11-05 18:08:55.508041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.686 [2024-11-05 18:08:55.508229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.686 [2024-11-05 18:08:55.508373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.664 ms 00:19:26.686 [2024-11-05 18:08:55.508391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.686 [2024-11-05 18:08:55.508447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.686 [2024-11-05 18:08:55.508463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.686 [2024-11-05 18:08:55.508477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:26.686 [2024-11-05 18:08:55.508487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.686 [2024-11-05 18:08:55.508971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.508985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.687 [2024-11-05 18:08:55.508998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:26.687 [2024-11-05 18:08:55.509008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.509113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.509126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.687 [2024-11-05 18:08:55.509142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:26.687 [2024-11-05 18:08:55.509153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.527587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.527620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.687 [2024-11-05 
18:08:55.527635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.444 ms 00:19:26.687 [2024-11-05 18:08:55.527645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.539932] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:26.687 [2024-11-05 18:08:55.545825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.545860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.687 [2024-11-05 18:08:55.545872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.136 ms 00:19:26.687 [2024-11-05 18:08:55.545885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.636454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.636700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:26.687 [2024-11-05 18:08:55.636725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.690 ms 00:19:26.687 [2024-11-05 18:08:55.636740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.636917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.636938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.687 [2024-11-05 18:08:55.636950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:19:26.687 [2024-11-05 18:08:55.636963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.672721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.672762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:26.687 [2024-11-05 18:08:55.672776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.762 ms 00:19:26.687 [2024-11-05 18:08:55.672790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.707171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.707329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:26.687 [2024-11-05 18:08:55.707351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.400 ms 00:19:26.687 [2024-11-05 18:08:55.707364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.708086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.708113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.687 [2024-11-05 18:08:55.708125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:19:26.687 [2024-11-05 18:08:55.708138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.804940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.804988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:26.687 [2024-11-05 18:08:55.805002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.906 ms 00:19:26.687 [2024-11-05 18:08:55.805016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 
18:08:55.840750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.840791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:26.687 [2024-11-05 18:08:55.840805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.719 ms 00:19:26.687 [2024-11-05 18:08:55.840820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.874544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.874584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:26.687 [2024-11-05 18:08:55.874597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.743 ms 00:19:26.687 [2024-11-05 18:08:55.874609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.908568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.908609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.687 [2024-11-05 18:08:55.908622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.978 ms 00:19:26.687 [2024-11-05 18:08:55.908634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.908673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.908690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:26.687 [2024-11-05 18:08:55.908700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:26.687 [2024-11-05 18:08:55.908712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.908803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.687 [2024-11-05 18:08:55.908819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.687 [2024-11-05 18:08:55.908828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:26.687 [2024-11-05 18:08:55.908840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.687 [2024-11-05 18:08:55.909833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4285.103 ms, result 0 00:19:26.687 { 00:19:26.687 "name": "ftl0", 00:19:26.687 "uuid": "b16fe3d6-2ddc-4ca8-87bd-15dd1162d31a" 00:19:26.687 } 00:19:26.687 18:08:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:26.687 18:08:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:26.687 18:08:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:26.946 18:08:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:26.946 [2024-11-05 18:08:56.233822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:26.946 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:26.946 Zero copy mechanism will not be used. 00:19:26.947 Running I/O for 4 seconds... 
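The bdevperf.sh@28 pipeline above is a sanity check that the FTL bdev actually came up: bdev_ftl_get_stats returns JSON, jq -r extracts .name, and grep -qw confirms it is ftl0. The zero-copy notice that follows is purely size-driven: the 69632-byte I/O size passed via -o (68 KiB) exceeds the 65536-byte threshold quoted in the message, so bdevperf falls back to buffered copies. A small sketch of that decision, with the threshold value taken from the log line itself:

    ZERO_COPY_THRESHOLD = 65536  # bytes, quoted in the message above
    io_size = 69632              # from `perform_tests ... -o 69632` (68 KiB)

    # Mirrors the decision the log reports: sizes above the threshold skip zero copy.
    use_zero_copy = io_size <= ZERO_COPY_THRESHOLD
    print(use_zero_copy)         # False -> "Zero copy mechanism will not be used."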
00:19:29.263 1414.00 IOPS, 93.90 MiB/s [2024-11-05T18:08:59.523Z] 1434.50 IOPS, 95.26 MiB/s [2024-11-05T18:09:00.461Z] 1473.33 IOPS, 97.84 MiB/s [2024-11-05T18:09:00.461Z] 1502.50 IOPS, 99.78 MiB/s 00:19:31.138 Latency(us) 00:19:31.138 [2024-11-05T18:09:00.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.138 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:31.138 ftl0 : 4.00 1502.14 99.75 0.00 0.00 698.61 250.04 2158.21 00:19:31.138 [2024-11-05T18:09:00.461Z] =================================================================================================================== 00:19:31.138 [2024-11-05T18:09:00.461Z] Total : 1502.14 99.75 0.00 0.00 698.61 250.04 2158.21 00:19:31.138 [2024-11-05 18:09:00.238011] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:31.138 { 00:19:31.138 "results": [ 00:19:31.138 { 00:19:31.138 "job": "ftl0", 00:19:31.138 "core_mask": "0x1", 00:19:31.138 "workload": "randwrite", 00:19:31.138 "status": "finished", 00:19:31.138 "queue_depth": 1, 00:19:31.138 "io_size": 69632, 00:19:31.138 "runtime": 4.00163, 00:19:31.138 "iops": 1502.1378788143832, 00:19:31.138 "mibps": 99.75134351501764, 00:19:31.138 "io_failed": 0, 00:19:31.138 "io_timeout": 0, 00:19:31.138 "avg_latency_us": 698.6148441378223, 00:19:31.138 "min_latency_us": 250.03694779116466, 00:19:31.138 "max_latency_us": 2158.213654618474 00:19:31.138 } 00:19:31.138 ], 00:19:31.138 "core_count": 1 00:19:31.138 } 00:19:31.138 18:09:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:31.138 [2024-11-05 18:09:00.375123] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:31.138 Running I/O for 4 seconds... 
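The results JSON above is internally consistent: the mibps field is derived from iops times the I/O size. A sketch of the cross-check, with the values copied from the q=1 run's JSON:

    io_size = 69632                   # bytes per I/O for the q=1 run
    iops = 1502.1378788143832         # "iops" from the results JSON above
    mibps = iops * io_size / (1 << 20)
    print(round(mibps, 2))            # 99.75 -> matches "mibps": 99.75134...

    # The same relation holds for the 4 KiB q=128 run starting here:
    # 11459.21 IOPS * 4096 B ~= 44.76 MiB/s, as reported in its summary.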
00:19:33.455 11742.00 IOPS, 45.87 MiB/s [2024-11-05T18:09:03.716Z] 11608.00 IOPS, 45.34 MiB/s [2024-11-05T18:09:04.653Z] 11462.33 IOPS, 44.77 MiB/s [2024-11-05T18:09:04.653Z] 11470.75 IOPS, 44.81 MiB/s 00:19:35.330 Latency(us) 00:19:35.330 [2024-11-05T18:09:04.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.330 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:35.330 ftl0 : 4.01 11459.21 44.76 0.00 0.00 11148.19 209.73 25582.73 00:19:35.330 [2024-11-05T18:09:04.653Z] =================================================================================================================== 00:19:35.330 [2024-11-05T18:09:04.653Z] Total : 11459.21 44.76 0.00 0.00 11148.19 0.00 25582.73 00:19:35.330 [2024-11-05 18:09:04.393700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:35.330 { 00:19:35.330 "results": [ 00:19:35.330 { 00:19:35.330 "job": "ftl0", 00:19:35.330 "core_mask": "0x1", 00:19:35.330 "workload": "randwrite", 00:19:35.330 "status": "finished", 00:19:35.330 "queue_depth": 128, 00:19:35.330 "io_size": 4096, 00:19:35.330 "runtime": 4.014848, 00:19:35.330 "iops": 11459.21339985972, 00:19:35.330 "mibps": 44.76255234320203, 00:19:35.330 "io_failed": 0, 00:19:35.330 "io_timeout": 0, 00:19:35.330 "avg_latency_us": 11148.189393005761, 00:19:35.330 "min_latency_us": 209.73493975903614, 00:19:35.330 "max_latency_us": 25582.727710843374 00:19:35.330 } 00:19:35.330 ], 00:19:35.330 "core_count": 1 00:19:35.330 } 00:19:35.330 18:09:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:35.330 [2024-11-05 18:09:04.509589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:35.330 Running I/O for 4 seconds... 
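The verify run that starts here reports its LBA range twice in what follows: in hex in the latency table header ("Verification LBA range: start 0x0 length 0x1400000") and in decimal in the results JSON ("length": 20971520). Both encode the same value, and the throughput column again follows from iops times I/O size; a sketch using the figures from the verify run's JSON below:

    print(0x1400000)                          # 20971520 -> "length" in the verify results JSON
    iops = 9033.683246378894                  # "iops" from the verify run's JSON
    print(round(iops * 4096 / (1 << 20), 2))  # 35.29 MiB/s, matching its mibps field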
00:19:37.205 9000.00 IOPS, 35.16 MiB/s [2024-11-05T18:09:07.908Z] 8960.00 IOPS, 35.00 MiB/s [2024-11-05T18:09:08.877Z] 8967.00 IOPS, 35.03 MiB/s [2024-11-05T18:09:08.877Z] 9022.50 IOPS, 35.24 MiB/s 00:19:39.554 Latency(us) 00:19:39.554 [2024-11-05T18:09:08.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.554 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:39.554 Verification LBA range: start 0x0 length 0x1400000 00:19:39.554 ftl0 : 4.01 9033.68 35.29 0.00 0.00 14127.62 238.52 30951.94 00:19:39.554 [2024-11-05T18:09:08.877Z] =================================================================================================================== 00:19:39.554 [2024-11-05T18:09:08.877Z] Total : 9033.68 35.29 0.00 0.00 14127.62 0.00 30951.94 00:19:39.554 [2024-11-05 18:09:08.530903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:39.554 { 00:19:39.554 "results": [ 00:19:39.554 { 00:19:39.554 "job": "ftl0", 00:19:39.554 "core_mask": "0x1", 00:19:39.554 "workload": "verify", 00:19:39.554 "status": "finished", 00:19:39.554 "verify_range": { 00:19:39.554 "start": 0, 00:19:39.554 "length": 20971520 00:19:39.554 }, 00:19:39.554 "queue_depth": 128, 00:19:39.554 "io_size": 4096, 00:19:39.554 "runtime": 4.008996, 00:19:39.554 "iops": 9033.683246378894, 00:19:39.554 "mibps": 35.287825181167555, 00:19:39.554 "io_failed": 0, 00:19:39.554 "io_timeout": 0, 00:19:39.554 "avg_latency_us": 14127.615505361406, 00:19:39.554 "min_latency_us": 238.52208835341366, 00:19:39.554 "max_latency_us": 30951.942168674697 00:19:39.554 } 00:19:39.554 ], 00:19:39.554 "core_count": 1 00:19:39.554 } 00:19:39.554 18:09:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:39.554 [2024-11-05 18:09:08.737914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.554 [2024-11-05 18:09:08.737959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:39.554 [2024-11-05 18:09:08.737976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:39.554 [2024-11-05 18:09:08.737989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.554 [2024-11-05 18:09:08.738010] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:39.554 [2024-11-05 18:09:08.742172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.554 [2024-11-05 18:09:08.742203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:39.554 [2024-11-05 18:09:08.742219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.148 ms 00:19:39.554 [2024-11-05 18:09:08.742230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.554 [2024-11-05 18:09:08.743891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.554 [2024-11-05 18:09:08.743929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:39.554 [2024-11-05 18:09:08.743945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:19:39.554 [2024-11-05 18:09:08.743955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:08.944654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:08.944699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:39.814 [2024-11-05 18:09:08.944721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 200.996 ms 00:19:39.814 [2024-11-05 18:09:08.944732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:08.949631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:08.949670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:39.814 [2024-11-05 18:09:08.949686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:19:39.814 [2024-11-05 18:09:08.949696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:08.984264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:08.984307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:39.814 [2024-11-05 18:09:08.984324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.542 ms 00:19:39.814 [2024-11-05 18:09:08.984334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:09.005444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:09.005473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:39.814 [2024-11-05 18:09:09.005492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.104 ms 00:19:39.814 [2024-11-05 18:09:09.005502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:09.005631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:09.005652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:39.814 [2024-11-05 18:09:09.005667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:39.814 [2024-11-05 18:09:09.005677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:09.040055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:09.040189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:39.814 [2024-11-05 18:09:09.040325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.412 ms 00:19:39.814 [2024-11-05 18:09:09.040362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:09.073941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:09.074081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:39.814 [2024-11-05 18:09:09.074200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.541 ms 00:19:39.814 [2024-11-05 18:09:09.074236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.814 [2024-11-05 18:09:09.107660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.814 [2024-11-05 18:09:09.107777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:39.814 [2024-11-05 18:09:09.107865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.418 ms 00:19:39.814 [2024-11-05 18:09:09.107900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.075 [2024-11-05 18:09:09.140891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.075 [2024-11-05 
18:09:09.141026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:40.075 [2024-11-05 18:09:09.141164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.937 ms 00:19:40.075 [2024-11-05 18:09:09.141201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.075 [2024-11-05 18:09:09.141259] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:40.075 [2024-11-05 18:09:09.141300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.141991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.142002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.142015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.142025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:40.075 [2024-11-05 18:09:09.142038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142233] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142544] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:40.076 [2024-11-05 18:09:09.142599] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:40.076 [2024-11-05 18:09:09.142612] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b16fe3d6-2ddc-4ca8-87bd-15dd1162d31a 00:19:40.076 [2024-11-05 18:09:09.142623] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:40.076 [2024-11-05 18:09:09.142635] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:40.076 [2024-11-05 18:09:09.142648] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:40.076 [2024-11-05 18:09:09.142660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:40.076 [2024-11-05 18:09:09.142670] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:40.076 [2024-11-05 18:09:09.142682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:40.076 [2024-11-05 18:09:09.142692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:40.076 [2024-11-05 18:09:09.142706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:40.076 [2024-11-05 18:09:09.142715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:40.076 [2024-11-05 18:09:09.142728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-11-05 18:09:09.142738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:40.076 [2024-11-05 18:09:09.142751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:19:40.076 [2024-11-05 18:09:09.142761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-11-05 18:09:09.162235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-11-05 18:09:09.162356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:40.076 [2024-11-05 18:09:09.162460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.453 ms 00:19:40.076 [2024-11-05 18:09:09.162497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-11-05 18:09:09.163044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.076 [2024-11-05 18:09:09.163085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:40.076 [2024-11-05 18:09:09.163226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:19:40.076 [2024-11-05 18:09:09.163263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-11-05 18:09:09.214728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.076 [2024-11-05 18:09:09.214865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.076 [2024-11-05 18:09:09.215028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.076 [2024-11-05 18:09:09.215066] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-11-05 18:09:09.215139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.076 [2024-11-05 18:09:09.215206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.076 [2024-11-05 18:09:09.215244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.076 [2024-11-05 18:09:09.215274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-11-05 18:09:09.215453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.076 [2024-11-05 18:09:09.215555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.076 [2024-11-05 18:09:09.215628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.076 [2024-11-05 18:09:09.215662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-11-05 18:09:09.215706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.076 [2024-11-05 18:09:09.215737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.076 [2024-11-05 18:09:09.215811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.076 [2024-11-05 18:09:09.215843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.076 [2024-11-05 18:09:09.333546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.076 [2024-11-05 18:09:09.333753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.076 [2024-11-05 18:09:09.333875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.076 [2024-11-05 18:09:09.333911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.429132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.336 [2024-11-05 18:09:09.429315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.336 [2024-11-05 18:09:09.429456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.336 [2024-11-05 18:09:09.429496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.429628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.336 [2024-11-05 18:09:09.429734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.336 [2024-11-05 18:09:09.429780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.336 [2024-11-05 18:09:09.429810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.429938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.336 [2024-11-05 18:09:09.429978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.336 [2024-11-05 18:09:09.430062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.336 [2024-11-05 18:09:09.430097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.430248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.336 [2024-11-05 18:09:09.430339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.336 [2024-11-05 18:09:09.430362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:40.336 [2024-11-05 18:09:09.430373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.430428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.336 [2024-11-05 18:09:09.430441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:40.336 [2024-11-05 18:09:09.430455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.336 [2024-11-05 18:09:09.430465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.430504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.336 [2024-11-05 18:09:09.430514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.336 [2024-11-05 18:09:09.430527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.336 [2024-11-05 18:09:09.430541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.430585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:40.336 [2024-11-05 18:09:09.430606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.336 [2024-11-05 18:09:09.430619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:40.336 [2024-11-05 18:09:09.430629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.336 [2024-11-05 18:09:09.430753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 693.919 ms, result 0 00:19:40.336 true 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 74730 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 74730 ']' 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 74730 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74730 00:19:40.336 killing process with pid 74730 00:19:40.336 Received shutdown signal, test time was about 4.000000 seconds 00:19:40.336 00:19:40.336 Latency(us) 00:19:40.336 [2024-11-05T18:09:09.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.336 [2024-11-05T18:09:09.659Z] =================================================================================================================== 00:19:40.336 [2024-11-05T18:09:09.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74730' 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 74730 00:19:40.336 18:09:09 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 74730 00:19:41.275 Remove shared memory files 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:41.275 18:09:10 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:41.275 00:19:41.275 real 0m23.047s 00:19:41.275 user 0m25.565s 00:19:41.275 sys 0m1.209s 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:41.275 18:09:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:41.275 ************************************ 00:19:41.275 END TEST ftl_bdevperf 00:19:41.275 ************************************ 00:19:41.535 18:09:10 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:41.535 18:09:10 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:41.535 18:09:10 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:41.535 18:09:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:41.535 ************************************ 00:19:41.535 START TEST ftl_trim 00:19:41.535 ************************************ 00:19:41.535 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:41.535 * Looking for test storage... 00:19:41.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.535 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:41.535 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:19:41.535 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:41.535 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.535 18:09:10 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:41.795 18:09:10 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:41.795 18:09:10 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.795 18:09:10 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:41.795 18:09:10 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.795 18:09:10 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.795 18:09:10 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.795 18:09:10 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:41.795 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.795 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.795 --rc genhtml_branch_coverage=1 00:19:41.795 --rc genhtml_function_coverage=1 00:19:41.795 --rc genhtml_legend=1 00:19:41.795 --rc geninfo_all_blocks=1 00:19:41.795 --rc geninfo_unexecuted_blocks=1 00:19:41.795 00:19:41.795 ' 00:19:41.795 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.795 --rc genhtml_branch_coverage=1 00:19:41.795 --rc genhtml_function_coverage=1 00:19:41.795 --rc genhtml_legend=1 00:19:41.795 --rc geninfo_all_blocks=1 00:19:41.795 --rc geninfo_unexecuted_blocks=1 00:19:41.795 00:19:41.795 ' 00:19:41.795 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.795 --rc genhtml_branch_coverage=1 00:19:41.795 --rc genhtml_function_coverage=1 00:19:41.795 --rc genhtml_legend=1 00:19:41.795 --rc geninfo_all_blocks=1 00:19:41.795 --rc geninfo_unexecuted_blocks=1 00:19:41.795 00:19:41.795 ' 00:19:41.795 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:41.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.795 --rc genhtml_branch_coverage=1 00:19:41.795 --rc genhtml_function_coverage=1 00:19:41.795 --rc genhtml_legend=1 00:19:41.795 --rc geninfo_all_blocks=1 00:19:41.795 --rc geninfo_unexecuted_blocks=1 00:19:41.795 00:19:41.795 ' 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
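The xtrace above steps through the version comparison in scripts/common.sh: `lt 1.15 2` splits both versions on dots (ver1_l=2 fields, ver2_l=1), loops up to the longer field count, and compares field by field as decimals, so lcov 1.15 sorts before 2 and the 1.x coverage options are selected. A minimal Python sketch of the same logic, reconstructed from the visible trace rather than quoted from common.sh:

    def lt(v1: str, v2: str) -> bool:
        a = [int(x) for x in v1.split(".")]
        b = [int(x) for x in v2.split(".")]
        n = max(len(a), len(b))        # the trace loops to the longer field count
        a += [0] * (n - len(a))        # missing fields compare as 0
        b += [0] * (n - len(b))
        for x, y in zip(a, b):
            if x != y:
                return x < y           # here: 1 < 2 on the first field
        return False                   # equal versions are not "<"

    print(lt("1.15", "2"))             # True -> lcov 1.15 is treated as older than 2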
00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:41.795 18:09:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:41.796 18:09:10 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75085 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75085 00:19:41.796 18:09:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:41.796 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75085 ']' 00:19:41.796 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.796 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:41.796 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.796 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:41.796 18:09:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:41.796 [2024-11-05 18:09:11.013178] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:19:41.796 [2024-11-05 18:09:11.013517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75085 ] 00:19:42.055 [2024-11-05 18:09:11.194500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:42.055 [2024-11-05 18:09:11.306520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:42.055 [2024-11-05 18:09:11.306656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.055 [2024-11-05 18:09:11.306690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.993 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:42.993 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:19:42.993 18:09:12 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:42.993 18:09:12 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:42.993 18:09:12 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:42.993 18:09:12 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:42.993 18:09:12 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:42.993 18:09:12 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:43.251 18:09:12 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:43.251 18:09:12 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:43.251 18:09:12 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:43.251 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:19:43.251 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:43.251 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:43.251 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:43.251 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:43.511 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:43.511 { 00:19:43.511 "name": "nvme0n1", 00:19:43.511 "aliases": [ 
00:19:43.511 "637cb7bd-5225-4a03-b339-faa1160a768b" 00:19:43.511 ], 00:19:43.511 "product_name": "NVMe disk", 00:19:43.511 "block_size": 4096, 00:19:43.511 "num_blocks": 1310720, 00:19:43.511 "uuid": "637cb7bd-5225-4a03-b339-faa1160a768b", 00:19:43.511 "numa_id": -1, 00:19:43.511 "assigned_rate_limits": { 00:19:43.511 "rw_ios_per_sec": 0, 00:19:43.511 "rw_mbytes_per_sec": 0, 00:19:43.511 "r_mbytes_per_sec": 0, 00:19:43.511 "w_mbytes_per_sec": 0 00:19:43.511 }, 00:19:43.511 "claimed": true, 00:19:43.511 "claim_type": "read_many_write_one", 00:19:43.511 "zoned": false, 00:19:43.511 "supported_io_types": { 00:19:43.511 "read": true, 00:19:43.511 "write": true, 00:19:43.511 "unmap": true, 00:19:43.511 "flush": true, 00:19:43.511 "reset": true, 00:19:43.511 "nvme_admin": true, 00:19:43.511 "nvme_io": true, 00:19:43.511 "nvme_io_md": false, 00:19:43.511 "write_zeroes": true, 00:19:43.511 "zcopy": false, 00:19:43.511 "get_zone_info": false, 00:19:43.511 "zone_management": false, 00:19:43.511 "zone_append": false, 00:19:43.511 "compare": true, 00:19:43.511 "compare_and_write": false, 00:19:43.511 "abort": true, 00:19:43.511 "seek_hole": false, 00:19:43.511 "seek_data": false, 00:19:43.511 "copy": true, 00:19:43.511 "nvme_iov_md": false 00:19:43.511 }, 00:19:43.511 "driver_specific": { 00:19:43.511 "nvme": [ 00:19:43.511 { 00:19:43.511 "pci_address": "0000:00:11.0", 00:19:43.511 "trid": { 00:19:43.511 "trtype": "PCIe", 00:19:43.511 "traddr": "0000:00:11.0" 00:19:43.511 }, 00:19:43.511 "ctrlr_data": { 00:19:43.511 "cntlid": 0, 00:19:43.511 "vendor_id": "0x1b36", 00:19:43.511 "model_number": "QEMU NVMe Ctrl", 00:19:43.511 "serial_number": "12341", 00:19:43.511 "firmware_revision": "8.0.0", 00:19:43.511 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:43.511 "oacs": { 00:19:43.511 "security": 0, 00:19:43.511 "format": 1, 00:19:43.511 "firmware": 0, 00:19:43.511 "ns_manage": 1 00:19:43.511 }, 00:19:43.511 "multi_ctrlr": false, 00:19:43.511 "ana_reporting": false 00:19:43.511 }, 00:19:43.511 "vs": { 00:19:43.511 "nvme_version": "1.4" 00:19:43.511 }, 00:19:43.511 "ns_data": { 00:19:43.511 "id": 1, 00:19:43.511 "can_share": false 00:19:43.511 } 00:19:43.511 } 00:19:43.511 ], 00:19:43.511 "mp_policy": "active_passive" 00:19:43.511 } 00:19:43.511 } 00:19:43.511 ]' 00:19:43.511 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:43.511 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:43.511 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:43.511 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:19:43.511 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:19:43.511 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:19:43.511 18:09:12 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:43.512 18:09:12 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:43.512 18:09:12 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:43.512 18:09:12 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:43.512 18:09:12 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:43.771 18:09:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=c647eca3-e57f-4223-a44c-6132af154941 00:19:43.771 18:09:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:43.771 18:09:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u c647eca3-e57f-4223-a44c-6132af154941 00:19:44.031 18:09:13 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=dc8cbc65-68c0-4998-8c39-e5227a6a8465 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dc8cbc65-68c0-4998-8c39-e5227a6a8465 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:44.290 18:09:13 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.290 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.290 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:44.290 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:44.290 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:44.290 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.549 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:44.549 { 00:19:44.549 "name": "66c27609-1462-4cec-9dd2-626a306a0be8", 00:19:44.549 "aliases": [ 00:19:44.549 "lvs/nvme0n1p0" 00:19:44.549 ], 00:19:44.549 "product_name": "Logical Volume", 00:19:44.549 "block_size": 4096, 00:19:44.549 "num_blocks": 26476544, 00:19:44.549 "uuid": "66c27609-1462-4cec-9dd2-626a306a0be8", 00:19:44.549 "assigned_rate_limits": { 00:19:44.549 "rw_ios_per_sec": 0, 00:19:44.549 "rw_mbytes_per_sec": 0, 00:19:44.549 "r_mbytes_per_sec": 0, 00:19:44.549 "w_mbytes_per_sec": 0 00:19:44.549 }, 00:19:44.549 "claimed": false, 00:19:44.549 "zoned": false, 00:19:44.549 "supported_io_types": { 00:19:44.549 "read": true, 00:19:44.549 "write": true, 00:19:44.549 "unmap": true, 00:19:44.549 "flush": false, 00:19:44.549 "reset": true, 00:19:44.549 "nvme_admin": false, 00:19:44.549 "nvme_io": false, 00:19:44.549 "nvme_io_md": false, 00:19:44.549 "write_zeroes": true, 00:19:44.549 "zcopy": false, 00:19:44.549 "get_zone_info": false, 00:19:44.549 "zone_management": false, 00:19:44.549 "zone_append": false, 00:19:44.549 "compare": false, 00:19:44.549 "compare_and_write": false, 00:19:44.549 "abort": false, 00:19:44.549 "seek_hole": true, 00:19:44.549 "seek_data": true, 00:19:44.549 "copy": false, 00:19:44.549 "nvme_iov_md": false 00:19:44.549 }, 00:19:44.549 "driver_specific": { 00:19:44.549 "lvol": { 00:19:44.549 "lvol_store_uuid": "dc8cbc65-68c0-4998-8c39-e5227a6a8465", 00:19:44.549 "base_bdev": "nvme0n1", 00:19:44.549 "thin_provision": true, 00:19:44.549 "num_allocated_clusters": 0, 00:19:44.549 "snapshot": false, 00:19:44.549 "clone": false, 00:19:44.549 "esnap_clone": false 00:19:44.549 } 00:19:44.549 } 00:19:44.549 } 00:19:44.549 ]' 00:19:44.549 18:09:13 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:44.549 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:44.549 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:44.549 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:44.549 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:44.549 18:09:13 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:44.549 18:09:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:44.549 18:09:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:44.549 18:09:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:44.808 18:09:14 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:44.808 18:09:14 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:44.808 18:09:14 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.808 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=66c27609-1462-4cec-9dd2-626a306a0be8 00:19:44.808 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:44.808 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:44.808 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:44.808 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66c27609-1462-4cec-9dd2-626a306a0be8 00:19:45.068 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:45.068 { 00:19:45.068 "name": "66c27609-1462-4cec-9dd2-626a306a0be8", 00:19:45.068 "aliases": [ 00:19:45.068 "lvs/nvme0n1p0" 00:19:45.068 ], 00:19:45.068 "product_name": "Logical Volume", 00:19:45.068 "block_size": 4096, 00:19:45.068 "num_blocks": 26476544, 00:19:45.068 "uuid": "66c27609-1462-4cec-9dd2-626a306a0be8", 00:19:45.068 "assigned_rate_limits": { 00:19:45.068 "rw_ios_per_sec": 0, 00:19:45.068 "rw_mbytes_per_sec": 0, 00:19:45.068 "r_mbytes_per_sec": 0, 00:19:45.068 "w_mbytes_per_sec": 0 00:19:45.068 }, 00:19:45.068 "claimed": false, 00:19:45.068 "zoned": false, 00:19:45.068 "supported_io_types": { 00:19:45.068 "read": true, 00:19:45.068 "write": true, 00:19:45.068 "unmap": true, 00:19:45.068 "flush": false, 00:19:45.068 "reset": true, 00:19:45.068 "nvme_admin": false, 00:19:45.068 "nvme_io": false, 00:19:45.068 "nvme_io_md": false, 00:19:45.068 "write_zeroes": true, 00:19:45.068 "zcopy": false, 00:19:45.068 "get_zone_info": false, 00:19:45.068 "zone_management": false, 00:19:45.068 "zone_append": false, 00:19:45.068 "compare": false, 00:19:45.068 "compare_and_write": false, 00:19:45.068 "abort": false, 00:19:45.068 "seek_hole": true, 00:19:45.068 "seek_data": true, 00:19:45.068 "copy": false, 00:19:45.068 "nvme_iov_md": false 00:19:45.068 }, 00:19:45.068 "driver_specific": { 00:19:45.068 "lvol": { 00:19:45.068 "lvol_store_uuid": "dc8cbc65-68c0-4998-8c39-e5227a6a8465", 00:19:45.068 "base_bdev": "nvme0n1", 00:19:45.068 "thin_provision": true, 00:19:45.068 "num_allocated_clusters": 0, 00:19:45.068 "snapshot": false, 00:19:45.068 "clone": false, 00:19:45.068 "esnap_clone": false 00:19:45.068 } 00:19:45.068 } 00:19:45.068 } 00:19:45.068 ]' 00:19:45.068 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:45.068 18:09:14 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:19:45.068 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:45.327 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:19:45.327 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:45.327 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:45.327 18:09:14 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:45.327 18:09:14 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:45.327 18:09:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:45.327 18:09:14 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:45.327 18:09:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 66c27609-1462-4cec-9dd2-626a306a0be8 00:19:45.327 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=66c27609-1462-4cec-9dd2-626a306a0be8 00:19:45.328 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:19:45.328 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:19:45.328 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:19:45.328 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 66c27609-1462-4cec-9dd2-626a306a0be8 00:19:45.586 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:19:45.586 { 00:19:45.586 "name": "66c27609-1462-4cec-9dd2-626a306a0be8", 00:19:45.586 "aliases": [ 00:19:45.586 "lvs/nvme0n1p0" 00:19:45.586 ], 00:19:45.586 "product_name": "Logical Volume", 00:19:45.586 "block_size": 4096, 00:19:45.586 "num_blocks": 26476544, 00:19:45.586 "uuid": "66c27609-1462-4cec-9dd2-626a306a0be8", 00:19:45.586 "assigned_rate_limits": { 00:19:45.586 "rw_ios_per_sec": 0, 00:19:45.586 "rw_mbytes_per_sec": 0, 00:19:45.586 "r_mbytes_per_sec": 0, 00:19:45.586 "w_mbytes_per_sec": 0 00:19:45.586 }, 00:19:45.586 "claimed": false, 00:19:45.586 "zoned": false, 00:19:45.586 "supported_io_types": { 00:19:45.586 "read": true, 00:19:45.586 "write": true, 00:19:45.586 "unmap": true, 00:19:45.586 "flush": false, 00:19:45.586 "reset": true, 00:19:45.586 "nvme_admin": false, 00:19:45.586 "nvme_io": false, 00:19:45.586 "nvme_io_md": false, 00:19:45.586 "write_zeroes": true, 00:19:45.586 "zcopy": false, 00:19:45.586 "get_zone_info": false, 00:19:45.586 "zone_management": false, 00:19:45.586 "zone_append": false, 00:19:45.586 "compare": false, 00:19:45.586 "compare_and_write": false, 00:19:45.586 "abort": false, 00:19:45.586 "seek_hole": true, 00:19:45.586 "seek_data": true, 00:19:45.586 "copy": false, 00:19:45.586 "nvme_iov_md": false 00:19:45.586 }, 00:19:45.586 "driver_specific": { 00:19:45.586 "lvol": { 00:19:45.586 "lvol_store_uuid": "dc8cbc65-68c0-4998-8c39-e5227a6a8465", 00:19:45.586 "base_bdev": "nvme0n1", 00:19:45.586 "thin_provision": true, 00:19:45.586 "num_allocated_clusters": 0, 00:19:45.586 "snapshot": false, 00:19:45.586 "clone": false, 00:19:45.586 "esnap_clone": false 00:19:45.586 } 00:19:45.586 } 00:19:45.586 } 00:19:45.586 ]' 00:19:45.586 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:19:45.587 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:19:45.587 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:19:45.587 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:19:45.587 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:19:45.587 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:19:45.587 18:09:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:45.587 18:09:14 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 66c27609-1462-4cec-9dd2-626a306a0be8 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:45.846 [2024-11-05 18:09:15.066584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.066635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:45.846 [2024-11-05 18:09:15.066672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:45.846 [2024-11-05 18:09:15.066683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.070099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.070142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.846 [2024-11-05 18:09:15.070158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.349 ms 00:19:45.846 [2024-11-05 18:09:15.070169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.070312] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:45.846 [2024-11-05 18:09:15.071356] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:45.846 [2024-11-05 18:09:15.071393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.071405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.846 [2024-11-05 18:09:15.071427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:19:45.846 [2024-11-05 18:09:15.071437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.071583] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 18011f82-9c8c-4643-a4b7-0489ece3fd08 00:19:45.846 [2024-11-05 18:09:15.073021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.073175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:45.846 [2024-11-05 18:09:15.073195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:45.846 [2024-11-05 18:09:15.073208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.080763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.080797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.846 [2024-11-05 18:09:15.080814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.410 ms 00:19:45.846 [2024-11-05 18:09:15.080826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.080992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.081009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.846 [2024-11-05 18:09:15.081021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.079 ms 00:19:45.846 [2024-11-05 18:09:15.081038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.081100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.081113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:45.846 [2024-11-05 18:09:15.081124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:45.846 [2024-11-05 18:09:15.081137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.081190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:45.846 [2024-11-05 18:09:15.086468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.086501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.846 [2024-11-05 18:09:15.086521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.289 ms 00:19:45.846 [2024-11-05 18:09:15.086531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.086642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.086654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:45.846 [2024-11-05 18:09:15.086668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:45.846 [2024-11-05 18:09:15.086695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.086752] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:45.846 [2024-11-05 18:09:15.086885] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:45.846 [2024-11-05 18:09:15.086904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:45.846 [2024-11-05 18:09:15.086917] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:45.846 [2024-11-05 18:09:15.086934] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:45.846 [2024-11-05 18:09:15.086945] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:45.846 [2024-11-05 18:09:15.086959] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:45.846 [2024-11-05 18:09:15.086969] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:45.846 [2024-11-05 18:09:15.086980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:45.846 [2024-11-05 18:09:15.086992] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:45.846 [2024-11-05 18:09:15.087005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 [2024-11-05 18:09:15.087015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:45.846 [2024-11-05 18:09:15.087029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:45.846 [2024-11-05 18:09:15.087039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.087143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.846 
[2024-11-05 18:09:15.087154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:45.846 [2024-11-05 18:09:15.087167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:45.846 [2024-11-05 18:09:15.087176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.846 [2024-11-05 18:09:15.087337] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:45.846 [2024-11-05 18:09:15.087350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:45.846 [2024-11-05 18:09:15.087362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:45.846 [2024-11-05 18:09:15.087394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:45.846 [2024-11-05 18:09:15.087444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.846 [2024-11-05 18:09:15.087466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:45.846 [2024-11-05 18:09:15.087475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:45.846 [2024-11-05 18:09:15.087486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.846 [2024-11-05 18:09:15.087496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:45.846 [2024-11-05 18:09:15.087507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:45.846 [2024-11-05 18:09:15.087517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:45.846 [2024-11-05 18:09:15.087541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:45.846 [2024-11-05 18:09:15.087591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:45.846 [2024-11-05 18:09:15.087620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:45.846 [2024-11-05 18:09:15.087669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:45.846 [2024-11-05 18:09:15.087698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:45.846 [2024-11-05 18:09:15.087733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.846 [2024-11-05 18:09:15.087754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:45.846 [2024-11-05 18:09:15.087763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:45.846 [2024-11-05 18:09:15.087774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.846 [2024-11-05 18:09:15.087783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:45.846 [2024-11-05 18:09:15.087795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:45.846 [2024-11-05 18:09:15.087803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:45.846 [2024-11-05 18:09:15.087823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:45.846 [2024-11-05 18:09:15.087835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087843] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:45.846 [2024-11-05 18:09:15.087856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:45.846 [2024-11-05 18:09:15.087865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.846 [2024-11-05 18:09:15.087877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.846 [2024-11-05 18:09:15.087887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:45.847 [2024-11-05 18:09:15.087903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:45.847 [2024-11-05 18:09:15.087912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:45.847 [2024-11-05 18:09:15.087924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:45.847 [2024-11-05 18:09:15.087933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:45.847 [2024-11-05 18:09:15.087945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:45.847 [2024-11-05 18:09:15.087959] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:45.847 [2024-11-05 18:09:15.087974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.847 [2024-11-05 18:09:15.087985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:45.847 [2024-11-05 18:09:15.087998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:45.847 [2024-11-05 18:09:15.088008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:45.847 [2024-11-05 18:09:15.088021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:45.847 [2024-11-05 18:09:15.088031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:45.847 [2024-11-05 18:09:15.088043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:45.847 [2024-11-05 18:09:15.088054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:45.847 [2024-11-05 18:09:15.088066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:45.847 [2024-11-05 18:09:15.088076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:45.847 [2024-11-05 18:09:15.088091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:45.847 [2024-11-05 18:09:15.088102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:45.847 [2024-11-05 18:09:15.088114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:45.847 [2024-11-05 18:09:15.088125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:45.847 [2024-11-05 18:09:15.088137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:45.847 [2024-11-05 18:09:15.088148] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:45.847 [2024-11-05 18:09:15.088169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.847 [2024-11-05 18:09:15.088179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:45.847 [2024-11-05 18:09:15.088192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:45.847 [2024-11-05 18:09:15.088202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:45.847 [2024-11-05 18:09:15.088214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:45.847 [2024-11-05 18:09:15.088226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.847 [2024-11-05 18:09:15.088238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:45.847 [2024-11-05 18:09:15.088249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:19:45.847 [2024-11-05 18:09:15.088262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.847 [2024-11-05 18:09:15.088411] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:45.847 [2024-11-05 18:09:15.088443] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:50.042 [2024-11-05 18:09:18.859289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.859351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:50.042 [2024-11-05 18:09:18.859368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3776.998 ms 00:19:50.042 [2024-11-05 18:09:18.859399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.897943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.898000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.042 [2024-11-05 18:09:18.898016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.233 ms 00:19:50.042 [2024-11-05 18:09:18.898029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.898206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.898222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.042 [2024-11-05 18:09:18.898234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:50.042 [2024-11-05 18:09:18.898250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.949326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.949376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.042 [2024-11-05 18:09:18.949392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.075 ms 00:19:50.042 [2024-11-05 18:09:18.949421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.949563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.949580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.042 [2024-11-05 18:09:18.949601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.042 [2024-11-05 18:09:18.949614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.950089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.950110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.042 [2024-11-05 18:09:18.950123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:19:50.042 [2024-11-05 18:09:18.950136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.950276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.950290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.042 [2024-11-05 18:09:18.950302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:19:50.042 [2024-11-05 18:09:18.950317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.971234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.971278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:50.042 [2024-11-05 18:09:18.971292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.876 ms 00:19:50.042 [2024-11-05 18:09:18.971320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:18.983543] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.042 [2024-11-05 18:09:18.999764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:18.999810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.042 [2024-11-05 18:09:18.999827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.308 ms 00:19:50.042 [2024-11-05 18:09:18.999853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:19.103100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.103337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:50.042 [2024-11-05 18:09:19.103455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.263 ms 00:19:50.042 [2024-11-05 18:09:19.103494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:19.103790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.103839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.042 [2024-11-05 18:09:19.103939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:50.042 [2024-11-05 18:09:19.103975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:19.139841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.140001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:50.042 [2024-11-05 18:09:19.140044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.831 ms 00:19:50.042 [2024-11-05 18:09:19.140054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:19.175355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.175531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:50.042 [2024-11-05 18:09:19.175576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.206 ms 00:19:50.042 [2024-11-05 18:09:19.175586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:19.176440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.176462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.042 [2024-11-05 18:09:19.176476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:19:50.042 [2024-11-05 18:09:19.176487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:19.276093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.276134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:50.042 [2024-11-05 18:09:19.276174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.705 ms 00:19:50.042 [2024-11-05 18:09:19.276185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
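The FTL startup trace continues below (trim map and trim log clears, the dirty-state flag, the core poller) and finishes with result 0. For reference, the device stack this startup runs on was assembled earlier in the test through plain rpc.py calls; a condensed, hand-replayable sketch of that sequence against a running spdk_tgt (the UUID variables stand in for the values the create calls printed in this run, and the PCI addresses and sizes are specific to this QEMU VM):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0      # base device  -> nvme0n1
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0       # cache device -> nvc0n1
    lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                  # lvstore on the base bdev
    lvol_uuid=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")  # thin-provisioned 103424 MiB lvol
    $rpc bdev_split_create nvc0n1 -s 5171 1                                # one 5171 MiB slice -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol_uuid" -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10           # kicks off the startup traced here
    $rpc bdev_get_bdevs -b ftl0 -t 2000                                    # waitforbdev: poll until ftl0 appears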
00:19:50.042 [2024-11-05 18:09:19.312860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.313040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:50.042 [2024-11-05 18:09:19.313082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.570 ms 00:19:50.042 [2024-11-05 18:09:19.313094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.042 [2024-11-05 18:09:19.348017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.042 [2024-11-05 18:09:19.348172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:50.042 [2024-11-05 18:09:19.348213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.848 ms 00:19:50.042 [2024-11-05 18:09:19.348223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.302 [2024-11-05 18:09:19.383787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.302 [2024-11-05 18:09:19.383824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.302 [2024-11-05 18:09:19.383840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.463 ms 00:19:50.302 [2024-11-05 18:09:19.383882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.302 [2024-11-05 18:09:19.383997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.302 [2024-11-05 18:09:19.384013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.302 [2024-11-05 18:09:19.384030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.302 [2024-11-05 18:09:19.384040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.302 [2024-11-05 18:09:19.384149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.302 [2024-11-05 18:09:19.384164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.302 [2024-11-05 18:09:19.384177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:50.302 [2024-11-05 18:09:19.384186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.302 [2024-11-05 18:09:19.385428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.302 [2024-11-05 18:09:19.389565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4325.413 ms, result 0 00:19:50.302 [2024-11-05 18:09:19.390674] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.302 { 00:19:50.302 "name": "ftl0", 00:19:50.302 "uuid": "18011f82-9c8c-4643-a4b7-0489ece3fd08" 00:19:50.302 } 00:19:50.302 18:09:19 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:50.302 18:09:19 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:19:50.302 18:09:19 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:19:50.302 18:09:19 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:19:50.302 18:09:19 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:19:50.302 18:09:19 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:19:50.302 18:09:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:50.302 18:09:19 ftl.ftl_trim --
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:50.561 [ 00:19:50.561 { 00:19:50.561 "name": "ftl0", 00:19:50.561 "aliases": [ 00:19:50.561 "18011f82-9c8c-4643-a4b7-0489ece3fd08" 00:19:50.561 ], 00:19:50.561 "product_name": "FTL disk", 00:19:50.561 "block_size": 4096, 00:19:50.561 "num_blocks": 23592960, 00:19:50.561 "uuid": "18011f82-9c8c-4643-a4b7-0489ece3fd08", 00:19:50.561 "assigned_rate_limits": { 00:19:50.561 "rw_ios_per_sec": 0, 00:19:50.561 "rw_mbytes_per_sec": 0, 00:19:50.561 "r_mbytes_per_sec": 0, 00:19:50.561 "w_mbytes_per_sec": 0 00:19:50.561 }, 00:19:50.561 "claimed": false, 00:19:50.561 "zoned": false, 00:19:50.561 "supported_io_types": { 00:19:50.561 "read": true, 00:19:50.561 "write": true, 00:19:50.561 "unmap": true, 00:19:50.561 "flush": true, 00:19:50.561 "reset": false, 00:19:50.561 "nvme_admin": false, 00:19:50.561 "nvme_io": false, 00:19:50.561 "nvme_io_md": false, 00:19:50.561 "write_zeroes": true, 00:19:50.561 "zcopy": false, 00:19:50.561 "get_zone_info": false, 00:19:50.561 "zone_management": false, 00:19:50.561 "zone_append": false, 00:19:50.561 "compare": false, 00:19:50.561 "compare_and_write": false, 00:19:50.561 "abort": false, 00:19:50.561 "seek_hole": false, 00:19:50.561 "seek_data": false, 00:19:50.561 "copy": false, 00:19:50.561 "nvme_iov_md": false 00:19:50.561 }, 00:19:50.561 "driver_specific": { 00:19:50.561 "ftl": { 00:19:50.561 "base_bdev": "66c27609-1462-4cec-9dd2-626a306a0be8", 00:19:50.561 "cache": "nvc0n1p0" 00:19:50.561 } 00:19:50.561 } 00:19:50.561 } 00:19:50.561 ] 00:19:50.561 18:09:19 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:19:50.561 18:09:19 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:50.561 18:09:19 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:50.820 18:09:20 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:50.820 18:09:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:51.079 18:09:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:51.079 { 00:19:51.079 "name": "ftl0", 00:19:51.079 "aliases": [ 00:19:51.079 "18011f82-9c8c-4643-a4b7-0489ece3fd08" 00:19:51.079 ], 00:19:51.079 "product_name": "FTL disk", 00:19:51.079 "block_size": 4096, 00:19:51.079 "num_blocks": 23592960, 00:19:51.079 "uuid": "18011f82-9c8c-4643-a4b7-0489ece3fd08", 00:19:51.079 "assigned_rate_limits": { 00:19:51.079 "rw_ios_per_sec": 0, 00:19:51.079 "rw_mbytes_per_sec": 0, 00:19:51.079 "r_mbytes_per_sec": 0, 00:19:51.079 "w_mbytes_per_sec": 0 00:19:51.079 }, 00:19:51.079 "claimed": false, 00:19:51.079 "zoned": false, 00:19:51.079 "supported_io_types": { 00:19:51.079 "read": true, 00:19:51.079 "write": true, 00:19:51.079 "unmap": true, 00:19:51.079 "flush": true, 00:19:51.079 "reset": false, 00:19:51.079 "nvme_admin": false, 00:19:51.079 "nvme_io": false, 00:19:51.079 "nvme_io_md": false, 00:19:51.079 "write_zeroes": true, 00:19:51.079 "zcopy": false, 00:19:51.079 "get_zone_info": false, 00:19:51.079 "zone_management": false, 00:19:51.079 "zone_append": false, 00:19:51.079 "compare": false, 00:19:51.079 "compare_and_write": false, 00:19:51.079 "abort": false, 00:19:51.079 "seek_hole": false, 00:19:51.079 "seek_data": false, 00:19:51.079 "copy": false, 00:19:51.079 "nvme_iov_md": false 00:19:51.079 }, 00:19:51.079 "driver_specific": { 00:19:51.079 "ftl": { 00:19:51.079 "base_bdev": "66c27609-1462-4cec-9dd2-626a306a0be8", 
00:19:51.079 "cache": "nvc0n1p0" 00:19:51.079 } 00:19:51.079 } 00:19:51.079 } 00:19:51.079 ]' 00:19:51.079 18:09:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:51.079 18:09:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:51.079 18:09:20 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:51.340 [2024-11-05 18:09:20.404939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.404992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:51.340 [2024-11-05 18:09:20.405011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:51.340 [2024-11-05 18:09:20.405027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.405100] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:51.340 [2024-11-05 18:09:20.409235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.409431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:51.340 [2024-11-05 18:09:20.409463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.120 ms 00:19:51.340 [2024-11-05 18:09:20.409473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.410557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.410580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:51.340 [2024-11-05 18:09:20.410595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:19:51.340 [2024-11-05 18:09:20.410606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.413480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.413504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:51.340 [2024-11-05 18:09:20.413518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.824 ms 00:19:51.340 [2024-11-05 18:09:20.413528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.419155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.419189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.340 [2024-11-05 18:09:20.419203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.555 ms 00:19:51.340 [2024-11-05 18:09:20.419229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.454796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.454836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.340 [2024-11-05 18:09:20.454856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.497 ms 00:19:51.340 [2024-11-05 18:09:20.454882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.476197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.476235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.340 [2024-11-05 18:09:20.476251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.245 ms 00:19:51.340 [2024-11-05 18:09:20.476281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.476663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.476678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.340 [2024-11-05 18:09:20.476692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:51.340 [2024-11-05 18:09:20.476704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.511993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.512155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.340 [2024-11-05 18:09:20.512198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.294 ms 00:19:51.340 [2024-11-05 18:09:20.512208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.547265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.547303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.340 [2024-11-05 18:09:20.547321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.929 ms 00:19:51.340 [2024-11-05 18:09:20.547347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.582683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.582722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.340 [2024-11-05 18:09:20.582738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.259 ms 00:19:51.340 [2024-11-05 18:09:20.582763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.617437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.340 [2024-11-05 18:09:20.617472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.340 [2024-11-05 18:09:20.617488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.519 ms 00:19:51.340 [2024-11-05 18:09:20.617513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.340 [2024-11-05 18:09:20.617638] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.340 [2024-11-05 18:09:20.617662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617752] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.617994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.618007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.340 [2024-11-05 18:09:20.618017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 
[2024-11-05 18:09:20.618065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:51.341 [2024-11-05 18:09:20.618361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.341 [2024-11-05 18:09:20.618913] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.341 [2024-11-05 18:09:20.618928] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18011f82-9c8c-4643-a4b7-0489ece3fd08 00:19:51.341 [2024-11-05 18:09:20.618939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.341 [2024-11-05 18:09:20.618951] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.341 [2024-11-05 18:09:20.618961] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.341 [2024-11-05 18:09:20.618973] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.341 [2024-11-05 18:09:20.618986] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.341 [2024-11-05 18:09:20.618998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:51.341 [2024-11-05 18:09:20.619008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.341 [2024-11-05 18:09:20.619019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.341 [2024-11-05 18:09:20.619028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.341 [2024-11-05 18:09:20.619040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.341 [2024-11-05 18:09:20.619050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.341 [2024-11-05 18:09:20.619063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.406 ms 00:19:51.341 [2024-11-05 18:09:20.619072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.341 [2024-11-05 18:09:20.638315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.341 [2024-11-05 18:09:20.638349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.341 [2024-11-05 18:09:20.638369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.205 ms 00:19:51.341 [2024-11-05 18:09:20.638396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.341 [2024-11-05 18:09:20.639023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.341 [2024-11-05 18:09:20.639042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:51.341 [2024-11-05 18:09:20.639056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:19:51.341 [2024-11-05 18:09:20.639066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.601 [2024-11-05 18:09:20.706647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.601 [2024-11-05 18:09:20.706686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.601 [2024-11-05 18:09:20.706702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.601 [2024-11-05 18:09:20.706712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.601 [2024-11-05 18:09:20.706830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.601 [2024-11-05 18:09:20.706843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.601 [2024-11-05 18:09:20.706856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.601 [2024-11-05 18:09:20.706866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.601 [2024-11-05 18:09:20.706956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.601 [2024-11-05 18:09:20.706969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.601 [2024-11-05 18:09:20.706988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.601 [2024-11-05 18:09:20.706997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.601 [2024-11-05 18:09:20.707058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.601 [2024-11-05 18:09:20.707069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.601 [2024-11-05 18:09:20.707082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.601 [2024-11-05 18:09:20.707092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.601 [2024-11-05 18:09:20.835202] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.602 [2024-11-05 18:09:20.835398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.602 [2024-11-05 18:09:20.835436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.602 [2024-11-05 18:09:20.835448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.861 [2024-11-05 18:09:20.931372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.861 [2024-11-05 18:09:20.931437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.861 [2024-11-05 18:09:20.931470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.861 [2024-11-05 18:09:20.931481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.861 [2024-11-05 18:09:20.931595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.861 [2024-11-05 18:09:20.931608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.861 [2024-11-05 18:09:20.931641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.861 [2024-11-05 18:09:20.931654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.861 [2024-11-05 18:09:20.931763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.861 [2024-11-05 18:09:20.931775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.861 [2024-11-05 18:09:20.931788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.861 [2024-11-05 18:09:20.931798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.861 [2024-11-05 18:09:20.931973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.861 [2024-11-05 18:09:20.931987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.861 [2024-11-05 18:09:20.932000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.861 [2024-11-05 18:09:20.932010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.861 [2024-11-05 18:09:20.932097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.861 [2024-11-05 18:09:20.932109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.861 [2024-11-05 18:09:20.932122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.861 [2024-11-05 18:09:20.932133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.861 [2024-11-05 18:09:20.932219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.861 [2024-11-05 18:09:20.932230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.861 [2024-11-05 18:09:20.932245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.861 [2024-11-05 18:09:20.932255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.861 [2024-11-05 18:09:20.932331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.861 [2024-11-05 18:09:20.932343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.861 [2024-11-05 18:09:20.932356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.861 [2024-11-05 18:09:20.932366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:51.861 [2024-11-05 18:09:20.933316] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.225 ms, result 0 00:19:51.861 true 00:19:51.861 18:09:20 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75085 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75085 ']' 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75085 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75085 00:19:51.861 killing process with pid 75085 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75085' 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75085 00:19:51.861 18:09:20 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75085 00:19:57.195 18:09:25 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:57.454 65536+0 records in 00:19:57.454 65536+0 records out 00:19:57.454 268435456 bytes (268 MB, 256 MiB) copied, 0.953821 s, 281 MB/s 00:19:57.454 18:09:26 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.714 [2024-11-05 18:09:26.783834] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
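The trim.sh steps traced above first generate the test payload with dd: 65536 records of 4 KiB each is 65536 x 4096 = 268,435,456 bytes, matching the reported 256 MiB, and the ~281 MB/s figure is just that size divided by the 0.95 s runtime. The pattern file is then replayed onto the FTL bdev with spdk_dd, whose startup log follows below (bringing up ftl0 from the JSON config, with nvc0n1p0 as the write-buffer cache). A minimal stand-alone sketch of the same two steps, assuming the paths from this trace and that ftl.json already defines the ftl0 bdev; the xtrace above hides dd's output redirection, so the destination is made explicit here with of=:

    SPDK=/home/vagrant/spdk_repo/spdk
    # 65536 records x 4 KiB = 268435456 bytes (256 MiB) of random test data:
    dd if=/dev/urandom of="$SPDK/test/ftl/random_pattern" bs=4K count=65536
    # --ob selects an SPDK output bdev; --json supplies the bdev configuration:
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" \
        --ob=ftl0 --json="$SPDK/test/ftl/config/ftl.json"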
00:19:57.714 [2024-11-05 18:09:26.783948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75290 ] 00:19:57.714 [2024-11-05 18:09:26.960210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.973 [2024-11-05 18:09:27.059564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.232 [2024-11-05 18:09:27.404824] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.232 [2024-11-05 18:09:27.404892] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.493 [2024-11-05 18:09:27.565988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.566033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.493 [2024-11-05 18:09:27.566049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.493 [2024-11-05 18:09:27.566075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.569370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.569543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.493 [2024-11-05 18:09:27.569680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.279 ms 00:19:58.493 [2024-11-05 18:09:27.569719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.569847] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.493 [2024-11-05 18:09:27.571020] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.493 [2024-11-05 18:09:27.571168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.571245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.493 [2024-11-05 18:09:27.571281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.331 ms 00:19:58.493 [2024-11-05 18:09:27.571311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.572951] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:58.493 [2024-11-05 18:09:27.591565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.591722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:58.493 [2024-11-05 18:09:27.591853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.645 ms 00:19:58.493 [2024-11-05 18:09:27.591870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.591961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.591976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:58.493 [2024-11-05 18:09:27.591987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:58.493 [2024-11-05 18:09:27.591997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.598712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:58.493 [2024-11-05 18:09:27.598740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.493 [2024-11-05 18:09:27.598751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.686 ms 00:19:58.493 [2024-11-05 18:09:27.598776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.598879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.598893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.493 [2024-11-05 18:09:27.598903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:58.493 [2024-11-05 18:09:27.598912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.598937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.598951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.493 [2024-11-05 18:09:27.598961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:58.493 [2024-11-05 18:09:27.598970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.598991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.493 [2024-11-05 18:09:27.603787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.603817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.493 [2024-11-05 18:09:27.603828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:19:58.493 [2024-11-05 18:09:27.603854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.603918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.603930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.493 [2024-11-05 18:09:27.603940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:58.493 [2024-11-05 18:09:27.603950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.603969] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:58.493 [2024-11-05 18:09:27.603994] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:58.493 [2024-11-05 18:09:27.604028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:58.493 [2024-11-05 18:09:27.604045] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:58.493 [2024-11-05 18:09:27.604130] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.493 [2024-11-05 18:09:27.604143] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.493 [2024-11-05 18:09:27.604156] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.493 [2024-11-05 18:09:27.604168] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.493 [2024-11-05 18:09:27.604183] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.493 [2024-11-05 18:09:27.604194] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.493 [2024-11-05 18:09:27.604204] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.493 [2024-11-05 18:09:27.604214] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.493 [2024-11-05 18:09:27.604223] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.493 [2024-11-05 18:09:27.604233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.493 [2024-11-05 18:09:27.604243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.493 [2024-11-05 18:09:27.604253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:58.493 [2024-11-05 18:09:27.604262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.493 [2024-11-05 18:09:27.604335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.494 [2024-11-05 18:09:27.604346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.494 [2024-11-05 18:09:27.604359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:58.494 [2024-11-05 18:09:27.604368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.494 [2024-11-05 18:09:27.604468] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.494 [2024-11-05 18:09:27.604482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.494 [2024-11-05 18:09:27.604492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.494 [2024-11-05 18:09:27.604522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.494 [2024-11-05 18:09:27.604552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.494 [2024-11-05 18:09:27.604571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.494 [2024-11-05 18:09:27.604580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.494 [2024-11-05 18:09:27.604589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.494 [2024-11-05 18:09:27.604609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.494 [2024-11-05 18:09:27.604618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.494 [2024-11-05 18:09:27.604627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.494 [2024-11-05 18:09:27.604645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604654] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.494 [2024-11-05 18:09:27.604672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.494 [2024-11-05 18:09:27.604699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.494 [2024-11-05 18:09:27.604728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:58.494 [2024-11-05 18:09:27.604754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.494 [2024-11-05 18:09:27.604780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.494 [2024-11-05 18:09:27.604797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.494 [2024-11-05 18:09:27.604806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.494 [2024-11-05 18:09:27.604814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.494 [2024-11-05 18:09:27.604823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.494 [2024-11-05 18:09:27.604832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.494 [2024-11-05 18:09:27.604840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.494 [2024-11-05 18:09:27.604860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.494 [2024-11-05 18:09:27.604869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604878] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.494 [2024-11-05 18:09:27.604887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.494 [2024-11-05 18:09:27.604897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.494 [2024-11-05 18:09:27.604921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:58.494 [2024-11-05 18:09:27.604930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.494 [2024-11-05 18:09:27.604939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.494 
[2024-11-05 18:09:27.604948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.494 [2024-11-05 18:09:27.604956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.494 [2024-11-05 18:09:27.604965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.494 [2024-11-05 18:09:27.604992] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.494 [2024-11-05 18:09:27.605004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.494 [2024-11-05 18:09:27.605015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.494 [2024-11-05 18:09:27.605025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.494 [2024-11-05 18:09:27.605036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:58.494 [2024-11-05 18:09:27.605046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.494 [2024-11-05 18:09:27.605056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.494 [2024-11-05 18:09:27.605066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.494 [2024-11-05 18:09:27.605076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.494 [2024-11-05 18:09:27.605086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.494 [2024-11-05 18:09:27.605096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.494 [2024-11-05 18:09:27.605107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.494 [2024-11-05 18:09:27.605118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.494 [2024-11-05 18:09:27.605128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.494 [2024-11-05 18:09:27.605138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.494 [2024-11-05 18:09:27.605148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.494 [2024-11-05 18:09:27.605159] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.494 [2024-11-05 18:09:27.605169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.494 [2024-11-05 18:09:27.605180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.494 [2024-11-05 18:09:27.605190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.494 [2024-11-05 18:09:27.605201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.494 [2024-11-05 18:09:27.605211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.494 [2024-11-05 18:09:27.605222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.494 [2024-11-05 18:09:27.605233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.494 [2024-11-05 18:09:27.605247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:19:58.494 [2024-11-05 18:09:27.605256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.494 [2024-11-05 18:09:27.640594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.494 [2024-11-05 18:09:27.640634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.494 [2024-11-05 18:09:27.640647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.344 ms 00:19:58.494 [2024-11-05 18:09:27.640657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.494 [2024-11-05 18:09:27.640770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.494 [2024-11-05 18:09:27.640788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:58.494 [2024-11-05 18:09:27.640799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:58.494 [2024-11-05 18:09:27.640809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.494 [2024-11-05 18:09:27.715987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.494 [2024-11-05 18:09:27.716154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.494 [2024-11-05 18:09:27.716202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.279 ms 00:19:58.494 [2024-11-05 18:09:27.716217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.494 [2024-11-05 18:09:27.716319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.494 [2024-11-05 18:09:27.716332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.494 [2024-11-05 18:09:27.716344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:58.494 [2024-11-05 18:09:27.716354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.494 [2024-11-05 18:09:27.716958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.494 [2024-11-05 18:09:27.717058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.494 [2024-11-05 18:09:27.717075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:19:58.495 [2024-11-05 18:09:27.717091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.495 [2024-11-05 18:09:27.717212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.495 [2024-11-05 18:09:27.717225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.495 [2024-11-05 18:09:27.717236] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:58.495 [2024-11-05 18:09:27.717247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.495 [2024-11-05 18:09:27.736496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.495 [2024-11-05 18:09:27.736530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.495 [2024-11-05 18:09:27.736543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.258 ms 00:19:58.495 [2024-11-05 18:09:27.736552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.495 [2024-11-05 18:09:27.754779] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:58.495 [2024-11-05 18:09:27.754934] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:58.495 [2024-11-05 18:09:27.754954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.495 [2024-11-05 18:09:27.754965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:58.495 [2024-11-05 18:09:27.754976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.330 ms 00:19:58.495 [2024-11-05 18:09:27.754985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.495 [2024-11-05 18:09:27.783200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.495 [2024-11-05 18:09:27.783238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:58.495 [2024-11-05 18:09:27.783262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.123 ms 00:19:58.495 [2024-11-05 18:09:27.783272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.495 [2024-11-05 18:09:27.800513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.495 [2024-11-05 18:09:27.800550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:58.495 [2024-11-05 18:09:27.800562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.194 ms 00:19:58.495 [2024-11-05 18:09:27.800587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.817580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.817722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:58.754 [2024-11-05 18:09:27.817806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.938 ms 00:19:58.754 [2024-11-05 18:09:27.817840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.818587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.818706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:58.754 [2024-11-05 18:09:27.818782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:19:58.754 [2024-11-05 18:09:27.818818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.898851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.899066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:58.754 [2024-11-05 18:09:27.899185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.023 ms 00:19:58.754 [2024-11-05 18:09:27.899224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.909417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:58.754 [2024-11-05 18:09:27.924801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.924840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:58.754 [2024-11-05 18:09:27.924854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.472 ms 00:19:58.754 [2024-11-05 18:09:27.924881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.924985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.925002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:58.754 [2024-11-05 18:09:27.925013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:58.754 [2024-11-05 18:09:27.925024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.925076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.925088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:58.754 [2024-11-05 18:09:27.925098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:58.754 [2024-11-05 18:09:27.925108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.925134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.925145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:58.754 [2024-11-05 18:09:27.925157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:58.754 [2024-11-05 18:09:27.925167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.754 [2024-11-05 18:09:27.925202] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:58.754 [2024-11-05 18:09:27.925213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.754 [2024-11-05 18:09:27.925223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:58.754 [2024-11-05 18:09:27.925233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:58.754 [2024-11-05 18:09:27.925242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.755 [2024-11-05 18:09:27.960951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.755 [2024-11-05 18:09:27.960997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:58.755 [2024-11-05 18:09:27.961011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.746 ms 00:19:58.755 [2024-11-05 18:09:27.961022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.755 [2024-11-05 18:09:27.961134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.755 [2024-11-05 18:09:27.961148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:58.755 [2024-11-05 18:09:27.961159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:58.755 [2024-11-05 18:09:27.961169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
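Each management step in the trace above is reported as an Action / name / duration / status quadruplet, and the per-step durations add up to roughly the overall figure reported just below ('FTL startup', duration = 396.481 ms). When skimming long runs it can help to pair step names with their durations mechanically; a small sketch, assuming the console output has been saved with one log entry per line (build.log is a placeholder name, not a file produced by this job):

    # Keep only the trace_step lines, extract the name/duration fields, and
    # pair them up (each step logs its name immediately before its duration):
    grep 'trace_step' build.log \
        | grep -oE '(name|duration): .*' \
        | paste - -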
00:19:58.755 [2024-11-05 18:09:27.962114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.755 [2024-11-05 18:09:27.966420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 396.481 ms, result 0 00:19:58.755 [2024-11-05 18:09:27.967261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.755 [2024-11-05 18:09:27.985714] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.694  [2024-11-05T18:09:30.396Z] Copying: 21/256 [MB] (21 MBps) [2024-11-05T18:09:31.334Z] Copying: 43/256 [MB] (22 MBps) [2024-11-05T18:09:32.272Z] Copying: 66/256 [MB] (22 MBps) [2024-11-05T18:09:33.210Z] Copying: 89/256 [MB] (23 MBps) [2024-11-05T18:09:34.148Z] Copying: 112/256 [MB] (22 MBps) [2024-11-05T18:09:35.086Z] Copying: 134/256 [MB] (22 MBps) [2024-11-05T18:09:36.024Z] Copying: 155/256 [MB] (21 MBps) [2024-11-05T18:09:37.403Z] Copying: 176/256 [MB] (21 MBps) [2024-11-05T18:09:37.972Z] Copying: 197/256 [MB] (21 MBps) [2024-11-05T18:09:39.351Z] Copying: 220/256 [MB] (22 MBps) [2024-11-05T18:09:39.610Z] Copying: 243/256 [MB] (22 MBps) [2024-11-05T18:09:39.610Z] Copying: 256/256 [MB] (average 22 MBps)[2024-11-05 18:09:39.565731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.287 [2024-11-05 18:09:39.580107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.287 [2024-11-05 18:09:39.580146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:10.287 [2024-11-05 18:09:39.580160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:10.287 [2024-11-05 18:09:39.580186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.287 [2024-11-05 18:09:39.580208] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:10.287 [2024-11-05 18:09:39.584327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.287 [2024-11-05 18:09:39.584363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:10.287 [2024-11-05 18:09:39.584374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.111 ms 00:20:10.287 [2024-11-05 18:09:39.584400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.287 [2024-11-05 18:09:39.586614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.287 [2024-11-05 18:09:39.586777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:10.287 [2024-11-05 18:09:39.586799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:20:10.287 [2024-11-05 18:09:39.586810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.287 [2024-11-05 18:09:39.593156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.287 [2024-11-05 18:09:39.593189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:10.287 [2024-11-05 18:09:39.593206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.331 ms 00:20:10.287 [2024-11-05 18:09:39.593232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.287 [2024-11-05 18:09:39.598567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.287 
[2024-11-05 18:09:39.598603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:10.287 [2024-11-05 18:09:39.598614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.307 ms 00:20:10.287 [2024-11-05 18:09:39.598624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.633091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.547 [2024-11-05 18:09:39.633240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:10.547 [2024-11-05 18:09:39.633259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.464 ms 00:20:10.547 [2024-11-05 18:09:39.633269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.653446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.547 [2024-11-05 18:09:39.653479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:10.547 [2024-11-05 18:09:39.653498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.128 ms 00:20:10.547 [2024-11-05 18:09:39.653528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.653659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.547 [2024-11-05 18:09:39.653672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:10.547 [2024-11-05 18:09:39.653682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:10.547 [2024-11-05 18:09:39.653692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.688113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.547 [2024-11-05 18:09:39.688150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:10.547 [2024-11-05 18:09:39.688162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.459 ms 00:20:10.547 [2024-11-05 18:09:39.688188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.721822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.547 [2024-11-05 18:09:39.721971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:10.547 [2024-11-05 18:09:39.721990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.636 ms 00:20:10.547 [2024-11-05 18:09:39.722000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.755675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.547 [2024-11-05 18:09:39.755711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:10.547 [2024-11-05 18:09:39.755723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.621 ms 00:20:10.547 [2024-11-05 18:09:39.755748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.789184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.547 [2024-11-05 18:09:39.789218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:10.547 [2024-11-05 18:09:39.789230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.414 ms 00:20:10.547 [2024-11-05 18:09:39.789239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.547 [2024-11-05 18:09:39.789288] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:10.547 [2024-11-05 18:09:39.789310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789556] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 
18:09:39.789838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.789998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.790008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.790018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:10.547 [2024-11-05 18:09:39.790029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:20:10.548 [2024-11-05 18:09:39.790088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:10.548 [2024-11-05 18:09:39.790368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:10.548 [2024-11-05 18:09:39.790378] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18011f82-9c8c-4643-a4b7-0489ece3fd08 00:20:10.548 [2024-11-05 18:09:39.790388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:10.548 [2024-11-05 18:09:39.790399] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:10.548 [2024-11-05 18:09:39.790419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:10.548 [2024-11-05 18:09:39.790429] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:10.548 [2024-11-05 18:09:39.790438] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:10.548 [2024-11-05 18:09:39.790447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:10.548 [2024-11-05 18:09:39.790457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:10.548 [2024-11-05 18:09:39.790465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:10.548 [2024-11-05 18:09:39.790474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:10.548 [2024-11-05 18:09:39.790483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.548 [2024-11-05 18:09:39.790492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:10.548 [2024-11-05 18:09:39.790506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:20:10.548 [2024-11-05 18:09:39.790516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.548 [2024-11-05 18:09:39.809499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.548 [2024-11-05 18:09:39.809530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:10.548 [2024-11-05 18:09:39.809542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.995 ms 00:20:10.548 [2024-11-05 18:09:39.809551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.548 [2024-11-05 18:09:39.810046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.548 [2024-11-05 18:09:39.810062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:10.548 [2024-11-05 18:09:39.810072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:20:10.548 [2024-11-05 18:09:39.810082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.548 [2024-11-05 18:09:39.860134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.548 [2024-11-05 18:09:39.860171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.548 [2024-11-05 18:09:39.860184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.548 [2024-11-05 18:09:39.860210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.548 [2024-11-05 18:09:39.860295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.548 [2024-11-05 18:09:39.860310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.548 [2024-11-05 18:09:39.860320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:10.548 [2024-11-05 18:09:39.860330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.548 [2024-11-05 18:09:39.860378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.548 [2024-11-05 18:09:39.860390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.548 [2024-11-05 18:09:39.860400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.548 [2024-11-05 18:09:39.860409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.548 [2024-11-05 18:09:39.860446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.548 [2024-11-05 18:09:39.860458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.548 [2024-11-05 18:09:39.860472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.548 [2024-11-05 18:09:39.860481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:39.974033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:39.974226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.808 [2024-11-05 18:09:39.974263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:39.974275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.072780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:40.072828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.808 [2024-11-05 18:09:40.072847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:40.072857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.072922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:40.072933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.808 [2024-11-05 18:09:40.072942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:40.072952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.072979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:40.072989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.808 [2024-11-05 18:09:40.072999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:40.073012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.073111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:40.073123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.808 [2024-11-05 18:09:40.073132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:40.073142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.073177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:40.073189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:10.808 
[2024-11-05 18:09:40.073199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:40.073208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.073247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:40.073258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.808 [2024-11-05 18:09:40.073267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:40.073276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.073317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.808 [2024-11-05 18:09:40.073328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.808 [2024-11-05 18:09:40.073338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.808 [2024-11-05 18:09:40.073350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.808 [2024-11-05 18:09:40.073523] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 494.204 ms, result 0 00:20:11.747 00:20:11.747 00:20:12.006 18:09:41 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75442 00:20:12.006 18:09:41 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:12.006 18:09:41 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75442 00:20:12.006 18:09:41 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75442 ']' 00:20:12.006 18:09:41 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.006 18:09:41 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:12.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.006 18:09:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.006 18:09:41 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:12.006 18:09:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:12.006 [2024-11-05 18:09:41.213702] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:20:12.006 [2024-11-05 18:09:41.213826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75442 ] 00:20:12.266 [2024-11-05 18:09:41.393948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.266 [2024-11-05 18:09:41.492868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.219 18:09:42 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:13.219 18:09:42 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:13.219 18:09:42 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:13.529 [2024-11-05 18:09:42.550502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.529 [2024-11-05 18:09:42.550561] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.529 [2024-11-05 18:09:42.730459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.730673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:13.529 [2024-11-05 18:09:42.730776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:13.529 [2024-11-05 18:09:42.730817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.734530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.734683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.529 [2024-11-05 18:09:42.734815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:20:13.529 [2024-11-05 18:09:42.734852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.734982] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:13.529 [2024-11-05 18:09:42.735917] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:13.529 [2024-11-05 18:09:42.736073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.736150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.529 [2024-11-05 18:09:42.736189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:20:13.529 [2024-11-05 18:09:42.736219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.738057] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:13.529 [2024-11-05 18:09:42.757372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.757559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:13.529 [2024-11-05 18:09:42.757738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.350 ms 00:20:13.529 [2024-11-05 18:09:42.757765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.757861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.757881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:13.529 [2024-11-05 18:09:42.757893] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:13.529 [2024-11-05 18:09:42.757908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.764654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.764797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.529 [2024-11-05 18:09:42.764832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.704 ms 00:20:13.529 [2024-11-05 18:09:42.764845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.764957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.764974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.529 [2024-11-05 18:09:42.764985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:13.529 [2024-11-05 18:09:42.764998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.765031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.765045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:13.529 [2024-11-05 18:09:42.765055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:13.529 [2024-11-05 18:09:42.765067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.529 [2024-11-05 18:09:42.765090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:13.529 [2024-11-05 18:09:42.769822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.529 [2024-11-05 18:09:42.769853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.529 [2024-11-05 18:09:42.769867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:20:13.530 [2024-11-05 18:09:42.769877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.530 [2024-11-05 18:09:42.769946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.530 [2024-11-05 18:09:42.769958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:13.530 [2024-11-05 18:09:42.769971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:13.530 [2024-11-05 18:09:42.769984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.530 [2024-11-05 18:09:42.770008] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:13.530 [2024-11-05 18:09:42.770028] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:13.530 [2024-11-05 18:09:42.770072] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:13.530 [2024-11-05 18:09:42.770092] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:13.530 [2024-11-05 18:09:42.770181] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:13.530 [2024-11-05 18:09:42.770194] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:13.530 [2024-11-05 18:09:42.770212] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:13.530 [2024-11-05 18:09:42.770227] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770242] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770253] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:13.530 [2024-11-05 18:09:42.770265] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:13.530 [2024-11-05 18:09:42.770275] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:13.530 [2024-11-05 18:09:42.770296] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:13.530 [2024-11-05 18:09:42.770307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.530 [2024-11-05 18:09:42.770321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:13.530 [2024-11-05 18:09:42.770332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:20:13.530 [2024-11-05 18:09:42.770346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.530 [2024-11-05 18:09:42.770442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.530 [2024-11-05 18:09:42.770459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:13.530 [2024-11-05 18:09:42.770470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:13.530 [2024-11-05 18:09:42.770484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.530 [2024-11-05 18:09:42.770578] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:13.530 [2024-11-05 18:09:42.770598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:13.530 [2024-11-05 18:09:42.770609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:13.530 [2024-11-05 18:09:42.770650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:13.530 [2024-11-05 18:09:42.770692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.530 [2024-11-05 18:09:42.770716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:13.530 [2024-11-05 18:09:42.770731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:13.530 [2024-11-05 18:09:42.770741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.530 [2024-11-05 18:09:42.770755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:13.530 [2024-11-05 18:09:42.770765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:13.530 [2024-11-05 18:09:42.770790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.530 
[2024-11-05 18:09:42.770799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:13.530 [2024-11-05 18:09:42.770813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:13.530 [2024-11-05 18:09:42.770854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:13.530 [2024-11-05 18:09:42.770895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:13.530 [2024-11-05 18:09:42.770926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:13.530 [2024-11-05 18:09:42.770961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:13.530 [2024-11-05 18:09:42.770970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:13.530 [2024-11-05 18:09:42.770990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:13.530 [2024-11-05 18:09:42.770999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:13.530 [2024-11-05 18:09:42.771012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.530 [2024-11-05 18:09:42.771021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:13.530 [2024-11-05 18:09:42.771035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:13.530 [2024-11-05 18:09:42.771043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.530 [2024-11-05 18:09:42.771059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:13.530 [2024-11-05 18:09:42.771068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:13.530 [2024-11-05 18:09:42.771085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.530 [2024-11-05 18:09:42.771095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:13.530 [2024-11-05 18:09:42.771108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:13.530 [2024-11-05 18:09:42.771117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.530 [2024-11-05 18:09:42.771132] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:13.530 [2024-11-05 18:09:42.771145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:13.530 [2024-11-05 18:09:42.771164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.530 [2024-11-05 18:09:42.771175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.530 [2024-11-05 18:09:42.771189] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:13.530 [2024-11-05 18:09:42.771199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:13.530 [2024-11-05 18:09:42.771212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:13.530 [2024-11-05 18:09:42.771222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:13.530 [2024-11-05 18:09:42.771235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:13.530 [2024-11-05 18:09:42.771244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:13.530 [2024-11-05 18:09:42.771259] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:13.530 [2024-11-05 18:09:42.771271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.530 [2024-11-05 18:09:42.771292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:13.530 [2024-11-05 18:09:42.771302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:13.530 [2024-11-05 18:09:42.771316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:13.530 [2024-11-05 18:09:42.771326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:13.530 [2024-11-05 18:09:42.771344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:13.530 [2024-11-05 18:09:42.771354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:13.530 [2024-11-05 18:09:42.771369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:13.530 [2024-11-05 18:09:42.771379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:13.530 [2024-11-05 18:09:42.771393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:13.530 [2024-11-05 18:09:42.771403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:13.530 [2024-11-05 18:09:42.771432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:13.530 [2024-11-05 18:09:42.771442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:13.530 [2024-11-05 18:09:42.771457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:13.530 [2024-11-05 18:09:42.771468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:13.530 [2024-11-05 18:09:42.771481] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:13.530 [2024-11-05 
18:09:42.771492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.530 [2024-11-05 18:09:42.771513] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:13.530 [2024-11-05 18:09:42.771523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:13.531 [2024-11-05 18:09:42.771537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:13.531 [2024-11-05 18:09:42.771548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:13.531 [2024-11-05 18:09:42.771564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.531 [2024-11-05 18:09:42.771575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:13.531 [2024-11-05 18:09:42.771590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:20:13.531 [2024-11-05 18:09:42.771599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.531 [2024-11-05 18:09:42.808728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.531 [2024-11-05 18:09:42.808766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.531 [2024-11-05 18:09:42.808785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.122 ms 00:20:13.531 [2024-11-05 18:09:42.808813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.531 [2024-11-05 18:09:42.808935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.531 [2024-11-05 18:09:42.808950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.531 [2024-11-05 18:09:42.808966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:13.531 [2024-11-05 18:09:42.808976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.790 [2024-11-05 18:09:42.857059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.790 [2024-11-05 18:09:42.857096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.790 [2024-11-05 18:09:42.857136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.130 ms 00:20:13.790 [2024-11-05 18:09:42.857147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.790 [2024-11-05 18:09:42.857232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.857244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.791 [2024-11-05 18:09:42.857260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:13.791 [2024-11-05 18:09:42.857270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.857737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.857752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.791 [2024-11-05 18:09:42.857773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:20:13.791 [2024-11-05 18:09:42.857784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.857907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.857921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.791 [2024-11-05 18:09:42.857936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:13.791 [2024-11-05 18:09:42.857946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.878806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.878843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.791 [2024-11-05 18:09:42.878862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.864 ms 00:20:13.791 [2024-11-05 18:09:42.878873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.897194] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:13.791 [2024-11-05 18:09:42.897233] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:13.791 [2024-11-05 18:09:42.897253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.897264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:13.791 [2024-11-05 18:09:42.897279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.300 ms 00:20:13.791 [2024-11-05 18:09:42.897289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.925801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.925839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:13.791 [2024-11-05 18:09:42.925858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.471 ms 00:20:13.791 [2024-11-05 18:09:42.925884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.943496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.943531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:13.791 [2024-11-05 18:09:42.943553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.553 ms 00:20:13.791 [2024-11-05 18:09:42.943562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.961014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.961050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:13.791 [2024-11-05 18:09:42.961067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.403 ms 00:20:13.791 [2024-11-05 18:09:42.961077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:42.961876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:42.961903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:13.791 [2024-11-05 18:09:42.961917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:20:13.791 [2024-11-05 18:09:42.961928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 
18:09:43.073532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:43.073594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:13.791 [2024-11-05 18:09:43.073633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.748 ms 00:20:13.791 [2024-11-05 18:09:43.073644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:43.083831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:13.791 [2024-11-05 18:09:43.099162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:43.099219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:13.791 [2024-11-05 18:09:43.099241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.440 ms 00:20:13.791 [2024-11-05 18:09:43.099256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:43.099345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:43.099363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:13.791 [2024-11-05 18:09:43.099374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:13.791 [2024-11-05 18:09:43.099388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:43.099475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:43.099495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:13.791 [2024-11-05 18:09:43.099506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:13.791 [2024-11-05 18:09:43.099522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:43.099552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:43.099568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:13.791 [2024-11-05 18:09:43.099579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:13.791 [2024-11-05 18:09:43.099594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.791 [2024-11-05 18:09:43.099636] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:13.791 [2024-11-05 18:09:43.099658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.791 [2024-11-05 18:09:43.099670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:13.791 [2024-11-05 18:09:43.099691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:13.791 [2024-11-05 18:09:43.099702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.051 [2024-11-05 18:09:43.135700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.051 [2024-11-05 18:09:43.135745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.051 [2024-11-05 18:09:43.135765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.015 ms 00:20:14.051 [2024-11-05 18:09:43.135776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.051 [2024-11-05 18:09:43.135895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.051 [2024-11-05 18:09:43.135909] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.051 [2024-11-05 18:09:43.135925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:14.051 [2024-11-05 18:09:43.135940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.051 [2024-11-05 18:09:43.136867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.051 [2024-11-05 18:09:43.141055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 406.759 ms, result 0 00:20:14.051 [2024-11-05 18:09:43.142401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.051 Some configs were skipped because the RPC state that can call them passed over. 00:20:14.051 18:09:43 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:14.311 [2024-11-05 18:09:43.389429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.311 [2024-11-05 18:09:43.389598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:14.311 [2024-11-05 18:09:43.389688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.685 ms 00:20:14.311 [2024-11-05 18:09:43.389737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.311 [2024-11-05 18:09:43.389809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.058 ms, result 0 00:20:14.311 true 00:20:14.311 18:09:43 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:14.311 [2024-11-05 18:09:43.592946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.311 [2024-11-05 18:09:43.592991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:14.311 [2024-11-05 18:09:43.593011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:20:14.311 [2024-11-05 18:09:43.593022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.311 [2024-11-05 18:09:43.593068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.438 ms, result 0 00:20:14.311 true 00:20:14.311 18:09:43 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75442 00:20:14.311 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75442 ']' 00:20:14.311 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75442 00:20:14.311 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:20:14.311 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:14.311 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75442 00:20:14.570 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:14.570 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:14.570 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75442' 00:20:14.570 killing process with pid 75442 00:20:14.570 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75442 00:20:14.570 18:09:43 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75442 00:20:15.510 [2024-11-05 18:09:44.701894] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.702187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.510 [2024-11-05 18:09:44.702285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:15.510 [2024-11-05 18:09:44.702327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.702388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.510 [2024-11-05 18:09:44.706387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.706556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.510 [2024-11-05 18:09:44.706690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.908 ms 00:20:15.510 [2024-11-05 18:09:44.706707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.706967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.706982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.510 [2024-11-05 18:09:44.706996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:20:15.510 [2024-11-05 18:09:44.707007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.710227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.710264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.510 [2024-11-05 18:09:44.710282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.202 ms 00:20:15.510 [2024-11-05 18:09:44.710293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.715658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.715691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.510 [2024-11-05 18:09:44.715705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.332 ms 00:20:15.510 [2024-11-05 18:09:44.715715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.729642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.729684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.510 [2024-11-05 18:09:44.729702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.891 ms 00:20:15.510 [2024-11-05 18:09:44.729721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.740182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.740218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.510 [2024-11-05 18:09:44.740236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.408 ms 00:20:15.510 [2024-11-05 18:09:44.740247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.740385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.740399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.510 [2024-11-05 18:09:44.740425] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:15.510 [2024-11-05 18:09:44.740435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.755622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.755657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.510 [2024-11-05 18:09:44.755673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.190 ms 00:20:15.510 [2024-11-05 18:09:44.755683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.769876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.770050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.510 [2024-11-05 18:09:44.770084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.158 ms 00:20:15.510 [2024-11-05 18:09:44.770095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.784042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.784201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.510 [2024-11-05 18:09:44.784232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.908 ms 00:20:15.510 [2024-11-05 18:09:44.784242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.797977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.510 [2024-11-05 18:09:44.798135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.510 [2024-11-05 18:09:44.798163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.670 ms 00:20:15.510 [2024-11-05 18:09:44.798174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.510 [2024-11-05 18:09:44.798251] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.510 [2024-11-05 18:09:44.798270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 
18:09:44.798430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.510 [2024-11-05 18:09:44.798619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:15.511 [2024-11-05 18:09:44.798785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.798987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.511 [2024-11-05 18:09:44.799659] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.511 [2024-11-05 18:09:44.799676] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18011f82-9c8c-4643-a4b7-0489ece3fd08 00:20:15.511 [2024-11-05 18:09:44.799695] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.511 [2024-11-05 18:09:44.799710] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.511 [2024-11-05 18:09:44.799719] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.511 [2024-11-05 18:09:44.799732] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.511 [2024-11-05 18:09:44.799741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.511 [2024-11-05 18:09:44.799757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.511 [2024-11-05 18:09:44.799766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.511 [2024-11-05 18:09:44.799779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.511 [2024-11-05 18:09:44.799788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.511 [2024-11-05 18:09:44.799802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:15.511 [2024-11-05 18:09:44.799812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.511 [2024-11-05 18:09:44.799826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.556 ms 00:20:15.511 [2024-11-05 18:09:44.799852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.511 [2024-11-05 18:09:44.818030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.511 [2024-11-05 18:09:44.818062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.511 [2024-11-05 18:09:44.818084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.172 ms 00:20:15.512 [2024-11-05 18:09:44.818094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.512 [2024-11-05 18:09:44.818564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.512 [2024-11-05 18:09:44.818578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.512 [2024-11-05 18:09:44.818593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:20:15.512 [2024-11-05 18:09:44.818608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.771 [2024-11-05 18:09:44.884025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.771 [2024-11-05 18:09:44.884060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:15.771 [2024-11-05 18:09:44.884078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.771 [2024-11-05 18:09:44.884088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.771 [2024-11-05 18:09:44.884169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.771 [2024-11-05 18:09:44.884182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.771 [2024-11-05 18:09:44.884197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.771 [2024-11-05 18:09:44.884212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.771 [2024-11-05 18:09:44.884263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.771 [2024-11-05 18:09:44.884276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.771 [2024-11-05 18:09:44.884296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.771 [2024-11-05 18:09:44.884306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.771 [2024-11-05 18:09:44.884329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.771 [2024-11-05 18:09:44.884339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.771 [2024-11-05 18:09:44.884354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.771 [2024-11-05 18:09:44.884364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.771 [2024-11-05 18:09:45.001316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.771 [2024-11-05 18:09:45.001535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.771 [2024-11-05 18:09:45.001564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.771 [2024-11-05 18:09:45.001575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.030 [2024-11-05 
18:09:45.096944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.030 [2024-11-05 18:09:45.097133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.030 [2024-11-05 18:09:45.097162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.030 [2024-11-05 18:09:45.097178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.030 [2024-11-05 18:09:45.097258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.031 [2024-11-05 18:09:45.097270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.031 [2024-11-05 18:09:45.097292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.031 [2024-11-05 18:09:45.097303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.031 [2024-11-05 18:09:45.097336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.031 [2024-11-05 18:09:45.097347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.031 [2024-11-05 18:09:45.097362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.031 [2024-11-05 18:09:45.097373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.031 [2024-11-05 18:09:45.097551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.031 [2024-11-05 18:09:45.097567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:16.031 [2024-11-05 18:09:45.097584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.031 [2024-11-05 18:09:45.097596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.031 [2024-11-05 18:09:45.097643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.031 [2024-11-05 18:09:45.097664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.031 [2024-11-05 18:09:45.097680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.031 [2024-11-05 18:09:45.097691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.031 [2024-11-05 18:09:45.097737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.031 [2024-11-05 18:09:45.097754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.031 [2024-11-05 18:09:45.097774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.031 [2024-11-05 18:09:45.097784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.031 [2024-11-05 18:09:45.097832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.031 [2024-11-05 18:09:45.097845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.031 [2024-11-05 18:09:45.097860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.031 [2024-11-05 18:09:45.097873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.031 [2024-11-05 18:09:45.098016] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 396.732 ms, result 0 00:20:16.969 18:09:46 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:16.969 18:09:46 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.969 [2024-11-05 18:09:46.145911] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:16.969 [2024-11-05 18:09:46.146043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75500 ] 00:20:17.229 [2024-11-05 18:09:46.320605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.229 [2024-11-05 18:09:46.427296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.489 [2024-11-05 18:09:46.746591] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.489 [2024-11-05 18:09:46.746655] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.749 [2024-11-05 18:09:46.907242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.907290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.749 [2024-11-05 18:09:46.907306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:17.749 [2024-11-05 18:09:46.907316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.910147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.910187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.749 [2024-11-05 18:09:46.910199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.815 ms 00:20:17.749 [2024-11-05 18:09:46.910209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.910298] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.749 [2024-11-05 18:09:46.911186] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.749 [2024-11-05 18:09:46.911231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.911242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.749 [2024-11-05 18:09:46.911252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:20:17.749 [2024-11-05 18:09:46.911262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.912735] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.749 [2024-11-05 18:09:46.930861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.930901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.749 [2024-11-05 18:09:46.930914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.156 ms 00:20:17.749 [2024-11-05 18:09:46.930925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.931017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.931031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.749 [2024-11-05 18:09:46.931043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.021 ms 00:20:17.749 [2024-11-05 18:09:46.931053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.937688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.937715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.749 [2024-11-05 18:09:46.937726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:20:17.749 [2024-11-05 18:09:46.937736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.937826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.937840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.749 [2024-11-05 18:09:46.937851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:17.749 [2024-11-05 18:09:46.937861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.937887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.937903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.749 [2024-11-05 18:09:46.937914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:17.749 [2024-11-05 18:09:46.937924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.937945] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.749 [2024-11-05 18:09:46.942514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.942548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.749 [2024-11-05 18:09:46.942560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.581 ms 00:20:17.749 [2024-11-05 18:09:46.942570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-11-05 18:09:46.942631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-11-05 18:09:46.942643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.750 [2024-11-05 18:09:46.942654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:17.750 [2024-11-05 18:09:46.942665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.750 [2024-11-05 18:09:46.942683] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.750 [2024-11-05 18:09:46.942709] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.750 [2024-11-05 18:09:46.942743] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.750 [2024-11-05 18:09:46.942759] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.750 [2024-11-05 18:09:46.942843] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.750 [2024-11-05 18:09:46.942856] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.750 [2024-11-05 18:09:46.942869] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.750 [2024-11-05 18:09:46.942882] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.750 [2024-11-05 18:09:46.942899] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.750 [2024-11-05 18:09:46.942911] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.750 [2024-11-05 18:09:46.942921] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.750 [2024-11-05 18:09:46.942930] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.750 [2024-11-05 18:09:46.942940] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.750 [2024-11-05 18:09:46.942950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.750 [2024-11-05 18:09:46.942961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.750 [2024-11-05 18:09:46.942971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:20:17.750 [2024-11-05 18:09:46.942982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.750 [2024-11-05 18:09:46.943052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.750 [2024-11-05 18:09:46.943064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.750 [2024-11-05 18:09:46.943078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:17.750 [2024-11-05 18:09:46.943088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.750 [2024-11-05 18:09:46.943168] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.750 [2024-11-05 18:09:46.943188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.750 [2024-11-05 18:09:46.943199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.750 [2024-11-05 18:09:46.943231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.750 [2024-11-05 18:09:46.943259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.750 [2024-11-05 18:09:46.943278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.750 [2024-11-05 18:09:46.943287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.750 [2024-11-05 18:09:46.943296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.750 [2024-11-05 18:09:46.943315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.750 [2024-11-05 18:09:46.943324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.750 [2024-11-05 18:09:46.943332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943341] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.750 [2024-11-05 18:09:46.943350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.750 [2024-11-05 18:09:46.943378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.750 [2024-11-05 18:09:46.943404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.750 [2024-11-05 18:09:46.943448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.750 [2024-11-05 18:09:46.943475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.750 [2024-11-05 18:09:46.943503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.750 [2024-11-05 18:09:46.943521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.750 [2024-11-05 18:09:46.943529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.750 [2024-11-05 18:09:46.943538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.750 [2024-11-05 18:09:46.943547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.750 [2024-11-05 18:09:46.943555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.750 [2024-11-05 18:09:46.943564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.750 [2024-11-05 18:09:46.943582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.750 [2024-11-05 18:09:46.943592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943601] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.750 [2024-11-05 18:09:46.943610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.750 [2024-11-05 18:09:46.943619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.750 [2024-11-05 18:09:46.943641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.750 
[2024-11-05 18:09:46.943651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.750 [2024-11-05 18:09:46.943661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.750 [2024-11-05 18:09:46.943670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.750 [2024-11-05 18:09:46.943679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.750 [2024-11-05 18:09:46.943688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.750 [2024-11-05 18:09:46.943698] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.750 [2024-11-05 18:09:46.943710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.750 [2024-11-05 18:09:46.943721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.750 [2024-11-05 18:09:46.943732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.750 [2024-11-05 18:09:46.943741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.750 [2024-11-05 18:09:46.943752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.750 [2024-11-05 18:09:46.943762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.750 [2024-11-05 18:09:46.943772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.750 [2024-11-05 18:09:46.943783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.751 [2024-11-05 18:09:46.943793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.751 [2024-11-05 18:09:46.943803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.751 [2024-11-05 18:09:46.943813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.751 [2024-11-05 18:09:46.943823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.751 [2024-11-05 18:09:46.943832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.751 [2024-11-05 18:09:46.943842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.751 [2024-11-05 18:09:46.943851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.751 [2024-11-05 18:09:46.943862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.751 [2024-11-05 18:09:46.943873] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.751 [2024-11-05 18:09:46.943884] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.751 [2024-11-05 18:09:46.943893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.751 [2024-11-05 18:09:46.943903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.751 [2024-11-05 18:09:46.943915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.751 [2024-11-05 18:09:46.943925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.751 [2024-11-05 18:09:46.943934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.751 [2024-11-05 18:09:46.943948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:20:17.751 [2024-11-05 18:09:46.943958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.751 [2024-11-05 18:09:46.980618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.751 [2024-11-05 18:09:46.980654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.751 [2024-11-05 18:09:46.980667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.670 ms 00:20:17.751 [2024-11-05 18:09:46.980677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.751 [2024-11-05 18:09:46.980787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.751 [2024-11-05 18:09:46.980800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.751 [2024-11-05 18:09:46.980811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:17.751 [2024-11-05 18:09:46.980822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.751 [2024-11-05 18:09:47.054617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.751 [2024-11-05 18:09:47.054874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.751 [2024-11-05 18:09:47.054903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.893 ms 00:20:17.751 [2024-11-05 18:09:47.054915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.751 [2024-11-05 18:09:47.055017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.751 [2024-11-05 18:09:47.055031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.751 [2024-11-05 18:09:47.055043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.751 [2024-11-05 18:09:47.055053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.751 [2024-11-05 18:09:47.055527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.751 [2024-11-05 18:09:47.055543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.751 [2024-11-05 18:09:47.055554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:20:17.751 [2024-11-05 18:09:47.055570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.751 [2024-11-05 
18:09:47.055686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.751 [2024-11-05 18:09:47.055700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.751 [2024-11-05 18:09:47.055712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:17.751 [2024-11-05 18:09:47.055722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.074943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.074977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.011 [2024-11-05 18:09:47.074991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.229 ms 00:20:18.011 [2024-11-05 18:09:47.075002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.093528] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:18.011 [2024-11-05 18:09:47.093756] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.011 [2024-11-05 18:09:47.093777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.093790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.011 [2024-11-05 18:09:47.093802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.699 ms 00:20:18.011 [2024-11-05 18:09:47.093813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.121366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.121553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.011 [2024-11-05 18:09:47.121576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.517 ms 00:20:18.011 [2024-11-05 18:09:47.121589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.138026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.138061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.011 [2024-11-05 18:09:47.138074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.373 ms 00:20:18.011 [2024-11-05 18:09:47.138083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.154602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.154636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.011 [2024-11-05 18:09:47.154648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.473 ms 00:20:18.011 [2024-11-05 18:09:47.154658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.155342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.155360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.011 [2024-11-05 18:09:47.155371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:20:18.011 [2024-11-05 18:09:47.155381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.233827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.233884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.011 [2024-11-05 18:09:47.233901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.508 ms 00:20:18.011 [2024-11-05 18:09:47.233912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.243744] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.011 [2024-11-05 18:09:47.259169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.259207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.011 [2024-11-05 18:09:47.259222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.208 ms 00:20:18.011 [2024-11-05 18:09:47.259239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.259345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.259359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.011 [2024-11-05 18:09:47.259371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:18.011 [2024-11-05 18:09:47.259381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.259448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.259461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.011 [2024-11-05 18:09:47.259471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:18.011 [2024-11-05 18:09:47.259485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.259513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.259524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.011 [2024-11-05 18:09:47.259535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:18.011 [2024-11-05 18:09:47.259544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.259576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.011 [2024-11-05 18:09:47.259589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.259598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.011 [2024-11-05 18:09:47.259624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:18.011 [2024-11-05 18:09:47.259635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.294897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.294937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.011 [2024-11-05 18:09:47.294950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.295 ms 00:20:18.011 [2024-11-05 18:09:47.294961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.295069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.011 [2024-11-05 18:09:47.295084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:18.011 [2024-11-05 18:09:47.295095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:18.011 [2024-11-05 18:09:47.295105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.011 [2024-11-05 18:09:47.296040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.011 [2024-11-05 18:09:47.299912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.126 ms, result 0 00:20:18.012 [2024-11-05 18:09:47.300817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.012 [2024-11-05 18:09:47.318217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.392  [2024-11-05T18:09:49.652Z] Copying: 26/256 [MB] (26 MBps) [2024-11-05T18:09:50.589Z] Copying: 49/256 [MB] (23 MBps) [2024-11-05T18:09:51.537Z] Copying: 73/256 [MB] (23 MBps) [2024-11-05T18:09:52.478Z] Copying: 97/256 [MB] (23 MBps) [2024-11-05T18:09:53.417Z] Copying: 121/256 [MB] (24 MBps) [2024-11-05T18:09:54.358Z] Copying: 145/256 [MB] (24 MBps) [2024-11-05T18:09:55.737Z] Copying: 169/256 [MB] (23 MBps) [2024-11-05T18:09:56.674Z] Copying: 193/256 [MB] (23 MBps) [2024-11-05T18:09:57.611Z] Copying: 217/256 [MB] (23 MBps) [2024-11-05T18:09:58.183Z] Copying: 241/256 [MB] (23 MBps) [2024-11-05T18:09:58.183Z] Copying: 256/256 [MB] (average 24 MBps)[2024-11-05 18:09:57.957240] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:28.860 [2024-11-05 18:09:57.970943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:57.970983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:28.860 [2024-11-05 18:09:57.970997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:28.860 [2024-11-05 18:09:57.971018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:57.971039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:28.860 [2024-11-05 18:09:57.975022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:57.975063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:28.860 [2024-11-05 18:09:57.975075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.974 ms 00:20:28.860 [2024-11-05 18:09:57.975085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:57.975287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:57.975300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:28.860 [2024-11-05 18:09:57.975311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:20:28.860 [2024-11-05 18:09:57.975320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:57.977966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:57.977996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:28.860 [2024-11-05 18:09:57.978006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:20:28.860 [2024-11-05 18:09:57.978016] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:57.983170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:57.983202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:28.860 [2024-11-05 18:09:57.983213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.145 ms 00:20:28.860 [2024-11-05 18:09:57.983222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.016863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:58.016900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.860 [2024-11-05 18:09:58.016913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.636 ms 00:20:28.860 [2024-11-05 18:09:58.016922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.036130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:58.036168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.860 [2024-11-05 18:09:58.036193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.186 ms 00:20:28.860 [2024-11-05 18:09:58.036203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.036323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:58.036337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.860 [2024-11-05 18:09:58.036348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:28.860 [2024-11-05 18:09:58.036358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.070857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:58.070893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.860 [2024-11-05 18:09:58.070905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.522 ms 00:20:28.860 [2024-11-05 18:09:58.070914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.105001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:58.105038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.860 [2024-11-05 18:09:58.105050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.091 ms 00:20:28.860 [2024-11-05 18:09:58.105059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.138072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:58.138108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.860 [2024-11-05 18:09:58.138120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.016 ms 00:20:28.860 [2024-11-05 18:09:58.138129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.171192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.860 [2024-11-05 18:09:58.171227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.860 [2024-11-05 18:09:58.171239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.040 ms 00:20:28.860 [2024-11-05 18:09:58.171249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.860 [2024-11-05 18:09:58.171300] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.860 [2024-11-05 18:09:58.171317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.860 [2024-11-05 18:09:58.171519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 
[2024-11-05 18:09:58.171567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:20:28.861 [2024-11-05 18:09:58.171819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.171997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.861 [2024-11-05 18:09:58.172356] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.861 [2024-11-05 18:09:58.172365] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18011f82-9c8c-4643-a4b7-0489ece3fd08 00:20:28.861 [2024-11-05 18:09:58.172375] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.861 [2024-11-05 18:09:58.172385] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.861 [2024-11-05 18:09:58.172395] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.861 [2024-11-05 18:09:58.172405] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.861 [2024-11-05 18:09:58.172424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.861 [2024-11-05 18:09:58.172434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.861 [2024-11-05 18:09:58.172451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.861 [2024-11-05 18:09:58.172460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.861 [2024-11-05 18:09:58.172468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.862 [2024-11-05 18:09:58.172477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.862 [2024-11-05 18:09:58.172487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.862 [2024-11-05 18:09:58.172498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:20:28.862 [2024-11-05 18:09:58.172508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.191146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.171 [2024-11-05 18:09:58.191179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:29.171 [2024-11-05 18:09:58.191191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.649 ms 00:20:29.171 [2024-11-05 18:09:58.191200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.191725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.171 [2024-11-05 18:09:58.191744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:29.171 [2024-11-05 18:09:58.191755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:20:29.171 [2024-11-05 18:09:58.191764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.242473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.242509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.171 [2024-11-05 18:09:58.242521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.242531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.242612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 
18:09:58.242624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.171 [2024-11-05 18:09:58.242634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.242644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.242692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.242705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.171 [2024-11-05 18:09:58.242715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.242725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.242750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.242761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.171 [2024-11-05 18:09:58.242771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.242781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.358476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.358526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.171 [2024-11-05 18:09:58.358540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.358551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.452360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.452421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.171 [2024-11-05 18:09:58.452435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.452445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.452500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.452512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.171 [2024-11-05 18:09:58.452523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.452533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.452561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.452578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.171 [2024-11-05 18:09:58.452589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.452600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.452695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.452709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.171 [2024-11-05 18:09:58.452720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.171 [2024-11-05 18:09:58.452730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.171 [2024-11-05 18:09:58.452764] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.171 [2024-11-05 18:09:58.452776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:29.171 [2024-11-05 18:09:58.452791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.172 [2024-11-05 18:09:58.452800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-11-05 18:09:58.452837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.172 [2024-11-05 18:09:58.452848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.172 [2024-11-05 18:09:58.452858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.172 [2024-11-05 18:09:58.452868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-11-05 18:09:58.452909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:29.172 [2024-11-05 18:09:58.452925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.172 [2024-11-05 18:09:58.452935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:29.172 [2024-11-05 18:09:58.452944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.172 [2024-11-05 18:09:58.453079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 482.902 ms, result 0 00:20:30.110 00:20:30.110 00:20:30.369 18:09:59 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:30.369 18:09:59 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:30.627 18:09:59 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:30.886 [2024-11-05 18:09:59.996932] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:20:30.886 [2024-11-05 18:09:59.997080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75654 ] 00:20:30.886 [2024-11-05 18:10:00.198236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.146 [2024-11-05 18:10:00.303368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.405 [2024-11-05 18:10:00.641061] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.405 [2024-11-05 18:10:00.641128] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.666 [2024-11-05 18:10:00.802146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.802192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.666 [2024-11-05 18:10:00.802206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.666 [2024-11-05 18:10:00.802216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.805204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.805242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.666 [2024-11-05 18:10:00.805255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:20:31.666 [2024-11-05 18:10:00.805265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.805355] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.666 [2024-11-05 18:10:00.806276] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.666 [2024-11-05 18:10:00.806311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.806321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.666 [2024-11-05 18:10:00.806333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:20:31.666 [2024-11-05 18:10:00.806343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.807805] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:31.666 [2024-11-05 18:10:00.826236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.826278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:31.666 [2024-11-05 18:10:00.826291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.462 ms 00:20:31.666 [2024-11-05 18:10:00.826302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.826393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.826432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:31.666 [2024-11-05 18:10:00.826445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:31.666 [2024-11-05 18:10:00.826455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.833061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:31.666 [2024-11-05 18:10:00.833087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.666 [2024-11-05 18:10:00.833098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.577 ms 00:20:31.666 [2024-11-05 18:10:00.833108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.833195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.833210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.666 [2024-11-05 18:10:00.833221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:31.666 [2024-11-05 18:10:00.833233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.833259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.833274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.666 [2024-11-05 18:10:00.833285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:31.666 [2024-11-05 18:10:00.833295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.833315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:31.666 [2024-11-05 18:10:00.837950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.837982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.666 [2024-11-05 18:10:00.837993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:20:31.666 [2024-11-05 18:10:00.838003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.838064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.666 [2024-11-05 18:10:00.838076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.666 [2024-11-05 18:10:00.838087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:31.666 [2024-11-05 18:10:00.838098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.666 [2024-11-05 18:10:00.838117] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.666 [2024-11-05 18:10:00.838142] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:31.666 [2024-11-05 18:10:00.838176] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.666 [2024-11-05 18:10:00.838193] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:31.666 [2024-11-05 18:10:00.838275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:31.666 [2024-11-05 18:10:00.838288] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.666 [2024-11-05 18:10:00.838300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:31.666 [2024-11-05 18:10:00.838313] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.666 [2024-11-05 18:10:00.838344] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.666 [2024-11-05 18:10:00.838355] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:31.666 [2024-11-05 18:10:00.838365] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.667 [2024-11-05 18:10:00.838374] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:31.667 [2024-11-05 18:10:00.838384] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:31.667 [2024-11-05 18:10:00.838394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.667 [2024-11-05 18:10:00.838405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.667 [2024-11-05 18:10:00.838427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:31.667 [2024-11-05 18:10:00.838436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.667 [2024-11-05 18:10:00.838507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.667 [2024-11-05 18:10:00.838519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.667 [2024-11-05 18:10:00.838533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:31.667 [2024-11-05 18:10:00.838542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.667 [2024-11-05 18:10:00.838621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.667 [2024-11-05 18:10:00.838635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.667 [2024-11-05 18:10:00.838645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.667 [2024-11-05 18:10:00.838656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.667 [2024-11-05 18:10:00.838675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:31.667 [2024-11-05 18:10:00.838695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.667 [2024-11-05 18:10:00.838704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.667 [2024-11-05 18:10:00.838725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.667 [2024-11-05 18:10:00.838734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:31.667 [2024-11-05 18:10:00.838742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.667 [2024-11-05 18:10:00.838761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.667 [2024-11-05 18:10:00.838770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:31.667 [2024-11-05 18:10:00.838779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.667 [2024-11-05 18:10:00.838797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:31.667 [2024-11-05 18:10:00.838805] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.667 [2024-11-05 18:10:00.838822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.667 [2024-11-05 18:10:00.838840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.667 [2024-11-05 18:10:00.838849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.667 [2024-11-05 18:10:00.838867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.667 [2024-11-05 18:10:00.838876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.667 [2024-11-05 18:10:00.838893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.667 [2024-11-05 18:10:00.838902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:31.667 [2024-11-05 18:10:00.838918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.667 [2024-11-05 18:10:00.838926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.667 [2024-11-05 18:10:00.838943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.667 [2024-11-05 18:10:00.838952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:31.667 [2024-11-05 18:10:00.838960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.667 [2024-11-05 18:10:00.838968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:31.667 [2024-11-05 18:10:00.838976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:31.667 [2024-11-05 18:10:00.838986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.667 [2024-11-05 18:10:00.838994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:31.667 [2024-11-05 18:10:00.839003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:31.667 [2024-11-05 18:10:00.839011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.667 [2024-11-05 18:10:00.839020] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.667 [2024-11-05 18:10:00.839029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.667 [2024-11-05 18:10:00.839038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.667 [2024-11-05 18:10:00.839051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.667 [2024-11-05 18:10:00.839061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.667 [2024-11-05 18:10:00.839070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.667 [2024-11-05 18:10:00.839079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.667 
[2024-11-05 18:10:00.839088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.667 [2024-11-05 18:10:00.839097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.667 [2024-11-05 18:10:00.839106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.667 [2024-11-05 18:10:00.839116] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.667 [2024-11-05 18:10:00.839128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.667 [2024-11-05 18:10:00.839138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:31.667 [2024-11-05 18:10:00.839148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:31.667 [2024-11-05 18:10:00.839158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:31.667 [2024-11-05 18:10:00.839167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:31.667 [2024-11-05 18:10:00.839177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:31.667 [2024-11-05 18:10:00.839187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:31.667 [2024-11-05 18:10:00.839196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:31.667 [2024-11-05 18:10:00.839205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:31.667 [2024-11-05 18:10:00.839215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:31.667 [2024-11-05 18:10:00.839224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:31.667 [2024-11-05 18:10:00.839234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:31.667 [2024-11-05 18:10:00.839244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:31.667 [2024-11-05 18:10:00.839253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:31.667 [2024-11-05 18:10:00.839264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:31.667 [2024-11-05 18:10:00.839275] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.667 [2024-11-05 18:10:00.839285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.667 [2024-11-05 18:10:00.839298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.667 [2024-11-05 18:10:00.839308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.667 [2024-11-05 18:10:00.839318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.667 [2024-11-05 18:10:00.839328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:31.667 [2024-11-05 18:10:00.839338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.667 [2024-11-05 18:10:00.839347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.667 [2024-11-05 18:10:00.839360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:20:31.667 [2024-11-05 18:10:00.839370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.667 [2024-11-05 18:10:00.878511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.667 [2024-11-05 18:10:00.878546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.667 [2024-11-05 18:10:00.878560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.132 ms 00:20:31.667 [2024-11-05 18:10:00.878570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.667 [2024-11-05 18:10:00.878678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.667 [2024-11-05 18:10:00.878695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.667 [2024-11-05 18:10:00.878706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:31.667 [2024-11-05 18:10:00.878716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.667 [2024-11-05 18:10:00.932838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.667 [2024-11-05 18:10:00.932874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.667 [2024-11-05 18:10:00.932886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.188 ms 00:20:31.667 [2024-11-05 18:10:00.932900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.668 [2024-11-05 18:10:00.932986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.668 [2024-11-05 18:10:00.932999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.668 [2024-11-05 18:10:00.933010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.668 [2024-11-05 18:10:00.933021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.668 [2024-11-05 18:10:00.933462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.668 [2024-11-05 18:10:00.933484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.668 [2024-11-05 18:10:00.933495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:20:31.668 [2024-11-05 18:10:00.933509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.668 [2024-11-05 18:10:00.933619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.668 [2024-11-05 18:10:00.933633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.668 [2024-11-05 18:10:00.933644] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:31.668 [2024-11-05 18:10:00.933653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.668 [2024-11-05 18:10:00.952329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.668 [2024-11-05 18:10:00.952363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.668 [2024-11-05 18:10:00.952375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.676 ms 00:20:31.668 [2024-11-05 18:10:00.952385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.668 [2024-11-05 18:10:00.971203] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:31.668 [2024-11-05 18:10:00.971240] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.668 [2024-11-05 18:10:00.971255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.668 [2024-11-05 18:10:00.971266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.668 [2024-11-05 18:10:00.971276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.792 ms 00:20:31.668 [2024-11-05 18:10:00.971286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:00.999098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:00.999157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.927 [2024-11-05 18:10:00.999171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.782 ms 00:20:31.927 [2024-11-05 18:10:00.999182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.015888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.015923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.927 [2024-11-05 18:10:01.015935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.658 ms 00:20:31.927 [2024-11-05 18:10:01.015945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.032679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.032715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:31.927 [2024-11-05 18:10:01.032727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.690 ms 00:20:31.927 [2024-11-05 18:10:01.032737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.033448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.033476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.927 [2024-11-05 18:10:01.033488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:20:31.927 [2024-11-05 18:10:01.033498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.115051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.115113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:31.927 [2024-11-05 18:10:01.115129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.659 ms 00:20:31.927 [2024-11-05 18:10:01.115140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.125016] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:31.927 [2024-11-05 18:10:01.139897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.139937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.927 [2024-11-05 18:10:01.139951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.708 ms 00:20:31.927 [2024-11-05 18:10:01.139962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.140064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.140080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:31.927 [2024-11-05 18:10:01.140091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:31.927 [2024-11-05 18:10:01.140101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.140152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.140164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.927 [2024-11-05 18:10:01.140174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:31.927 [2024-11-05 18:10:01.140183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.140209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.140223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.927 [2024-11-05 18:10:01.140233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.927 [2024-11-05 18:10:01.140243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.927 [2024-11-05 18:10:01.140277] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:31.927 [2024-11-05 18:10:01.140289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.927 [2024-11-05 18:10:01.140299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:31.928 [2024-11-05 18:10:01.140309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:31.928 [2024-11-05 18:10:01.140318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.928 [2024-11-05 18:10:01.174169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.928 [2024-11-05 18:10:01.174207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.928 [2024-11-05 18:10:01.174221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.883 ms 00:20:31.928 [2024-11-05 18:10:01.174231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.928 [2024-11-05 18:10:01.174339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.928 [2024-11-05 18:10:01.174354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:31.928 [2024-11-05 18:10:01.174365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:31.928 [2024-11-05 18:10:01.174375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:31.928 [2024-11-05 18:10:01.175304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.928 [2024-11-05 18:10:01.179389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 373.481 ms, result 0 00:20:31.928 [2024-11-05 18:10:01.180300] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:31.928 [2024-11-05 18:10:01.197845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.186  [2024-11-05T18:10:01.509Z] Copying: 4096/4096 [kB] (average 22 MBps)[2024-11-05 18:10:01.382281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.186 [2024-11-05 18:10:01.395310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.186 [2024-11-05 18:10:01.395346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:32.186 [2024-11-05 18:10:01.395359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:32.186 [2024-11-05 18:10:01.395374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.186 [2024-11-05 18:10:01.395394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:32.186 [2024-11-05 18:10:01.399469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.186 [2024-11-05 18:10:01.399497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:32.186 [2024-11-05 18:10:01.399508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.058 ms 00:20:32.186 [2024-11-05 18:10:01.399517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.186 [2024-11-05 18:10:01.401496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.186 [2024-11-05 18:10:01.401531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:32.186 [2024-11-05 18:10:01.401544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.960 ms 00:20:32.186 [2024-11-05 18:10:01.401554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.186 [2024-11-05 18:10:01.404599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.186 [2024-11-05 18:10:01.404646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:32.186 [2024-11-05 18:10:01.404658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:20:32.186 [2024-11-05 18:10:01.404668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.186 [2024-11-05 18:10:01.409859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.186 [2024-11-05 18:10:01.409891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:32.186 [2024-11-05 18:10:01.409902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.169 ms 00:20:32.186 [2024-11-05 18:10:01.409912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.186 [2024-11-05 18:10:01.443731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.186 [2024-11-05 18:10:01.443767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:32.186 [2024-11-05 18:10:01.443780] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.820 ms
00:20:32.186 [2024-11-05 18:10:01.443790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.186 [2024-11-05 18:10:01.463852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.186 [2024-11-05 18:10:01.463894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:20:32.186 [2024-11-05 18:10:01.463910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.026 ms
00:20:32.186 [2024-11-05 18:10:01.463920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.186 [2024-11-05 18:10:01.464037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.186 [2024-11-05 18:10:01.464050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:20:32.186 [2024-11-05 18:10:01.464061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:20:32.186 [2024-11-05 18:10:01.464070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.186 [2024-11-05 18:10:01.499040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.186 [2024-11-05 18:10:01.499075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:20:32.186 [2024-11-05 18:10:01.499087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.999 ms
00:20:32.186 [2024-11-05 18:10:01.499096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.446 [2024-11-05 18:10:01.533602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.446 [2024-11-05 18:10:01.533637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:20:32.446 [2024-11-05 18:10:01.533649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.511 ms
00:20:32.446 [2024-11-05 18:10:01.533658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.446 [2024-11-05 18:10:01.567187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.446 [2024-11-05 18:10:01.567222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:20:32.446 [2024-11-05 18:10:01.567233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.524 ms
00:20:32.446 [2024-11-05 18:10:01.567243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.446 [2024-11-05 18:10:01.600631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.446 [2024-11-05 18:10:01.600668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:20:32.447 [2024-11-05 18:10:01.600680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.370 ms
00:20:32.447 [2024-11-05 18:10:01.600689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.447 [2024-11-05 18:10:01.600739] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:32.447 [2024-11-05 18:10:01.600755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
00:20:32.448 [2024-11-05 18:10:01.601798] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:32.448 [2024-11-05 18:10:01.601808] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18011f82-9c8c-4643-a4b7-0489ece3fd08
00:20:32.448 [2024-11-05 18:10:01.601820] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:32.448 [2024-11-05 18:10:01.601830] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:32.448 [2024-11-05 18:10:01.601838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:32.448 [2024-11-05 18:10:01.601848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:32.448 [2024-11-05 18:10:01.601858] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:32.448 [2024-11-05 18:10:01.601869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:32.448 [2024-11-05 18:10:01.601878] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:32.448 [2024-11-05 18:10:01.601886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:32.448 [2024-11-05 18:10:01.601894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
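The statistics block above is worth decoding: WAF (write amplification factor) is total media writes divided by user writes, and since this sequence issued no user I/O (user writes: 0; the 960 total writes are FTL metadata persisted during shutdown), the ratio is undefined, which ftl_debug.c prints as inf. A minimal shell sketch of the same arithmetic; the waf helper name is illustrative, not an SPDK function:

waf() {
    local total=$1 user=$2
    # No user I/O yet: the ratio is undefined, which the FTL reports as "inf"
    if (( user == 0 )); then
        echo inf
    else
        awk -v t="$total" -v u="$user" 'BEGIN { printf "%.3f\n", t / u }'
    fi
}
waf 960 0   # -> inf, matching the dump above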
00:20:32.448 [2024-11-05 18:10:01.601903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.448 [2024-11-05 18:10:01.601917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:32.448 [2024-11-05 18:10:01.601927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms
00:20:32.448 [2024-11-05 18:10:01.601936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.448 [2024-11-05 18:10:01.620443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.448 [2024-11-05 18:10:01.620475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:32.448 [2024-11-05 18:10:01.620487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.519 ms
00:20:32.448 [2024-11-05 18:10:01.620496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.448 [2024-11-05 18:10:01.620981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:32.448 [2024-11-05 18:10:01.621000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:32.448 [2024-11-05 18:10:01.621010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms
00:20:32.448 [2024-11-05 18:10:01.621020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
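From here the trace switches from Action to Rollback entries: the management process appears to tear the device down by walking the startup pipeline in reverse registration order (the first Rollback below, Initialize reloc, matches the last startup step seen later in this log, and Open base bdev unwinds last). A rough, purely illustrative shell model of that unwind-in-reverse pattern:

# Illustrative only: teardown behaves like a stack of deferred cleanups.
declare -a rollbacks=()
register_step() {
    echo "Action: $1"
    rollbacks=("$1" "${rollbacks[@]}")   # prepend, so teardown order is reversed
}
register_step 'Open base bdev'
register_step 'Open cache bdev'
register_step 'Initialize reloc'
for step in "${rollbacks[@]}"; do
    echo "Rollback: $step"               # prints: Initialize reloc, Open cache bdev, Open base bdev
done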
00:20:32.448 [2024-11-05 18:10:01.672853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.448 [2024-11-05 18:10:01.672887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:32.448 [2024-11-05 18:10:01.672900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.448 [2024-11-05 18:10:01.672910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.448 [2024-11-05 18:10:01.672988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.448 [2024-11-05 18:10:01.672999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:32.448 [2024-11-05 18:10:01.673009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.448 [2024-11-05 18:10:01.673018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.448 [2024-11-05 18:10:01.673068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.448 [2024-11-05 18:10:01.673082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:32.448 [2024-11-05 18:10:01.673092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.448 [2024-11-05 18:10:01.673102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.448 [2024-11-05 18:10:01.673118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.448 [2024-11-05 18:10:01.673132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:32.448 [2024-11-05 18:10:01.673142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.448 [2024-11-05 18:10:01.673152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.789910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.789953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:32.708 [2024-11-05 18:10:01.789967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.789977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.886070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:32.708 [2024-11-05 18:10:01.886084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.886094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.886167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:32.708 [2024-11-05 18:10:01.886177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.886187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.886225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:32.708 [2024-11-05 18:10:01.886241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.886251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.886363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:32.708 [2024-11-05 18:10:01.886373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.886382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.886447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:32.708 [2024-11-05 18:10:01.886457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.886472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.886520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:32.708 [2024-11-05 18:10:01.886529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.886539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:32.708 [2024-11-05 18:10:01.886593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:32.708 [2024-11-05 18:10:01.886606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:32.708 [2024-11-05 18:10:01.886615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:32.708 [2024-11-05 18:10:01.886742] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 492.219 ms, result 0
00:20:33.647
00:20:33.647
00:20:33.647 18:10:02 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=75690
00:20:33.647 18:10:02 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:20:33.647 18:10:02 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 75690
00:20:33.647 18:10:02 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75690 ']'
00:20:33.647 18:10:02 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:33.647 18:10:02 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:33.647 18:10:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:33.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:33.647 18:10:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable
00:20:33.647 18:10:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:20:33.906 [2024-11-05 18:10:02.999601] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
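The test has just relaunched spdk_tgt with -L ftl_init (enabling the ftl_init debug log component) and waitforlisten is polling until pid 75690 answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming the default socket path; the real waitforlisten in autotest_common.sh adds retry accounting, liveness checks, and error reporting:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    # rpc_get_methods is a cheap RPC that succeeds once the target is listening
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done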
00:20:33.906 [2024-11-05 18:10:02.999730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75690 ] 00:20:33.906 [2024-11-05 18:10:03.174834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.165 [2024-11-05 18:10:03.277050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.104 18:10:04 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.104 18:10:04 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:20:35.104 18:10:04 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:35.104 [2024-11-05 18:10:04.283055] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.104 [2024-11-05 18:10:04.283117] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.365 [2024-11-05 18:10:04.468403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.468464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:35.365 [2024-11-05 18:10:04.468482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.365 [2024-11-05 18:10:04.468492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.472129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.472168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.365 [2024-11-05 18:10:04.472183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:20:35.365 [2024-11-05 18:10:04.472193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.472302] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:35.365 [2024-11-05 18:10:04.473267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:35.365 [2024-11-05 18:10:04.473302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.473314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.365 [2024-11-05 18:10:04.473327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:20:35.365 [2024-11-05 18:10:04.473337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.474972] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:35.365 [2024-11-05 18:10:04.492760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.492807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:35.365 [2024-11-05 18:10:04.492821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.821 ms 00:20:35.365 [2024-11-05 18:10:04.492836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.492928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.492947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:35.365 [2024-11-05 18:10:04.492959] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:35.365 [2024-11-05 18:10:04.492973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.499670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.499712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.365 [2024-11-05 18:10:04.499724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.655 ms 00:20:35.365 [2024-11-05 18:10:04.499739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.499866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.499885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.365 [2024-11-05 18:10:04.499896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:35.365 [2024-11-05 18:10:04.499910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.499947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.499964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:35.365 [2024-11-05 18:10:04.499974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:35.365 [2024-11-05 18:10:04.499989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.500012] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:35.365 [2024-11-05 18:10:04.504663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.504695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.365 [2024-11-05 18:10:04.504711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:20:35.365 [2024-11-05 18:10:04.504721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.504791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.504804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:35.365 [2024-11-05 18:10:04.504819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:35.365 [2024-11-05 18:10:04.504834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.504860] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:35.365 [2024-11-05 18:10:04.504883] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:35.365 [2024-11-05 18:10:04.504929] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:35.365 [2024-11-05 18:10:04.504948] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:35.365 [2024-11-05 18:10:04.505051] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:35.365 [2024-11-05 18:10:04.505066] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:35.365 [2024-11-05 18:10:04.505085] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:35.365 [2024-11-05 18:10:04.505103] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:35.365 [2024-11-05 18:10:04.505121] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:35.365 [2024-11-05 18:10:04.505132] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:35.365 [2024-11-05 18:10:04.505148] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:35.365 [2024-11-05 18:10:04.505158] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:35.365 [2024-11-05 18:10:04.505178] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:35.365 [2024-11-05 18:10:04.505189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.505204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:35.365 [2024-11-05 18:10:04.505215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:20:35.365 [2024-11-05 18:10:04.505230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.505310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.365 [2024-11-05 18:10:04.505344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:35.365 [2024-11-05 18:10:04.505355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:35.365 [2024-11-05 18:10:04.505370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.365 [2024-11-05 18:10:04.505481] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:35.365 [2024-11-05 18:10:04.505502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:35.365 [2024-11-05 18:10:04.505514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.365 [2024-11-05 18:10:04.505529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.365 [2024-11-05 18:10:04.505540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:35.365 [2024-11-05 18:10:04.505554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:35.365 [2024-11-05 18:10:04.505565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:35.365 [2024-11-05 18:10:04.505586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:35.365 [2024-11-05 18:10:04.505597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:35.365 [2024-11-05 18:10:04.505612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.365 [2024-11-05 18:10:04.505622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:35.365 [2024-11-05 18:10:04.505637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:35.365 [2024-11-05 18:10:04.505647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.365 [2024-11-05 18:10:04.505662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:35.365 [2024-11-05 18:10:04.505680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:35.365 [2024-11-05 18:10:04.505694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.365 
[2024-11-05 18:10:04.505704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:35.365 [2024-11-05 18:10:04.505718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:35.365 [2024-11-05 18:10:04.505728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.365 [2024-11-05 18:10:04.505743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:35.365 [2024-11-05 18:10:04.505763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:35.365 [2024-11-05 18:10:04.505777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.365 [2024-11-05 18:10:04.505787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:35.365 [2024-11-05 18:10:04.505806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:35.365 [2024-11-05 18:10:04.505816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.365 [2024-11-05 18:10:04.505830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:35.365 [2024-11-05 18:10:04.505840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:35.366 [2024-11-05 18:10:04.505855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.366 [2024-11-05 18:10:04.505864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:35.366 [2024-11-05 18:10:04.505878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:35.366 [2024-11-05 18:10:04.505888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.366 [2024-11-05 18:10:04.505903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:35.366 [2024-11-05 18:10:04.505913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:35.366 [2024-11-05 18:10:04.505927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.366 [2024-11-05 18:10:04.505937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:35.366 [2024-11-05 18:10:04.505952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:35.366 [2024-11-05 18:10:04.505961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.366 [2024-11-05 18:10:04.505976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:35.366 [2024-11-05 18:10:04.505985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:35.366 [2024-11-05 18:10:04.506004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.366 [2024-11-05 18:10:04.506014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:35.366 [2024-11-05 18:10:04.506028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:35.366 [2024-11-05 18:10:04.506038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.366 [2024-11-05 18:10:04.506053] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:35.366 [2024-11-05 18:10:04.506064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:35.366 [2024-11-05 18:10:04.506084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.366 [2024-11-05 18:10:04.506094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.366 [2024-11-05 18:10:04.506109] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:35.366 [2024-11-05 18:10:04.506119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:35.366 [2024-11-05 18:10:04.506134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:35.366 [2024-11-05 18:10:04.506144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:35.366 [2024-11-05 18:10:04.506158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:35.366 [2024-11-05 18:10:04.506167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:35.366 [2024-11-05 18:10:04.506183] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:35.366 [2024-11-05 18:10:04.506196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.366 [2024-11-05 18:10:04.506218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:35.366 [2024-11-05 18:10:04.506229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:35.366 [2024-11-05 18:10:04.506244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:35.366 [2024-11-05 18:10:04.506255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:35.366 [2024-11-05 18:10:04.506270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:35.366 [2024-11-05 18:10:04.506280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:35.366 [2024-11-05 18:10:04.506296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:35.366 [2024-11-05 18:10:04.506306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:35.366 [2024-11-05 18:10:04.506322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:35.366 [2024-11-05 18:10:04.506333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:35.366 [2024-11-05 18:10:04.506348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:35.366 [2024-11-05 18:10:04.506358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:35.366 [2024-11-05 18:10:04.506373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:35.366 [2024-11-05 18:10:04.506384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:35.366 [2024-11-05 18:10:04.506400] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:35.366 [2024-11-05 
18:10:04.506421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.366 [2024-11-05 18:10:04.506442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:35.366 [2024-11-05 18:10:04.506453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:35.366 [2024-11-05 18:10:04.506468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:35.366 [2024-11-05 18:10:04.506479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:35.366 [2024-11-05 18:10:04.506497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.506508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:35.366 [2024-11-05 18:10:04.506524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:20:35.366 [2024-11-05 18:10:04.506534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.545967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.546006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.366 [2024-11-05 18:10:04.546023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.427 ms 00:20:35.366 [2024-11-05 18:10:04.546033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.546148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.546161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.366 [2024-11-05 18:10:04.546176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:35.366 [2024-11-05 18:10:04.546186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.592290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.592325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.366 [2024-11-05 18:10:04.592348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.149 ms 00:20:35.366 [2024-11-05 18:10:04.592359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.592450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.592463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.366 [2024-11-05 18:10:04.592479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:35.366 [2024-11-05 18:10:04.592489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.592940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.592962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.366 [2024-11-05 18:10:04.592985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:20:35.366 [2024-11-05 18:10:04.592995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.593116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.593129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.366 [2024-11-05 18:10:04.593145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:35.366 [2024-11-05 18:10:04.593156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.612367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.612401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.366 [2024-11-05 18:10:04.612426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.212 ms 00:20:35.366 [2024-11-05 18:10:04.612437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.630952] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:35.366 [2024-11-05 18:10:04.631007] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:35.366 [2024-11-05 18:10:04.631029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.631039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:35.366 [2024-11-05 18:10:04.631055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.517 ms 00:20:35.366 [2024-11-05 18:10:04.631065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.658500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.658540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:35.366 [2024-11-05 18:10:04.658558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.395 ms 00:20:35.366 [2024-11-05 18:10:04.658569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.366 [2024-11-05 18:10:04.675518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.366 [2024-11-05 18:10:04.675572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:35.366 [2024-11-05 18:10:04.675595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.887 ms 00:20:35.366 [2024-11-05 18:10:04.675606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.693221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.693258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:35.626 [2024-11-05 18:10:04.693291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.561 ms 00:20:35.626 [2024-11-05 18:10:04.693301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.694078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.694113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.626 [2024-11-05 18:10:04.694130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:20:35.626 [2024-11-05 18:10:04.694141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 
18:10:04.807380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.807444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:35.626 [2024-11-05 18:10:04.807466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.387 ms 00:20:35.626 [2024-11-05 18:10:04.807477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.817565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:35.626 [2024-11-05 18:10:04.833014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.833088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.626 [2024-11-05 18:10:04.833110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.468 ms 00:20:35.626 [2024-11-05 18:10:04.833125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.833217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.833237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:35.626 [2024-11-05 18:10:04.833248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:35.626 [2024-11-05 18:10:04.833263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.833316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.833333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.626 [2024-11-05 18:10:04.833344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:35.626 [2024-11-05 18:10:04.833359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.833389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.833405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.626 [2024-11-05 18:10:04.833428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.626 [2024-11-05 18:10:04.833443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.833487] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:35.626 [2024-11-05 18:10:04.833511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.833522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:35.626 [2024-11-05 18:10:04.833544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:35.626 [2024-11-05 18:10:04.833554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.868317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.868359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.626 [2024-11-05 18:10:04.868379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.780 ms 00:20:35.626 [2024-11-05 18:10:04.868389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.626 [2024-11-05 18:10:04.868519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.626 [2024-11-05 18:10:04.868534] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:35.626 [2024-11-05 18:10:04.868550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:20:35.626 [2024-11-05 18:10:04.868565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.626 [2024-11-05 18:10:04.869606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:35.626 [2024-11-05 18:10:04.873558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.481 ms, result 0
00:20:35.627 [2024-11-05 18:10:04.874812] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:35.627 Some configs were skipped because the RPC state that can call them passed over.
00:20:35.627 18:10:04 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:35.886 [2024-11-05 18:10:05.117394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:35.886 [2024-11-05 18:10:05.117461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:35.886 [2024-11-05 18:10:05.117475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms
00:20:35.886 [2024-11-05 18:10:05.117491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:35.886 [2024-11-05 18:10:05.117528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.569 ms, result 0
00:20:35.886 true
00:20:35.886 18:10:05 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:36.146 [2024-11-05 18:10:05.313160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:36.146 [2024-11-05 18:10:05.313202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:36.146 [2024-11-05 18:10:05.313222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms
00:20:36.146 [2024-11-05 18:10:05.313233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:36.146 [2024-11-05 18:10:05.313279] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.425 ms, result 0
00:20:36.146 true
00:20:36.146 18:10:05 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 75690
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75690 ']'
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75690
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75690
00:20:36.146 killing process with pid 75690
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75690'
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75690
00:20:36.146 18:10:05 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75690
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:37.527 [2024-11-05 18:10:06.467379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.527 [2024-11-05 18:10:06.482404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.527 [2024-11-05 18:10:06.482443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:37.527 [2024-11-05 18:10:06.482474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.026 ms 00:20:37.527 [2024-11-05 18:10:06.482484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.527 [2024-11-05 18:10:06.496816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.527 [2024-11-05 18:10:06.496850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:37.527 [2024-11-05 18:10:06.496872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.297 ms 00:20:37.527 [2024-11-05 18:10:06.496881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.527 [2024-11-05 18:10:06.510964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.527 [2024-11-05 18:10:06.511000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:37.527 [2024-11-05 18:10:06.511020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.050 ms 00:20:37.527 [2024-11-05 18:10:06.511029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.527 [2024-11-05 18:10:06.524637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:37.527 [2024-11-05 18:10:06.524671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:37.527 [2024-11-05 18:10:06.524688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.552 ms 00:20:37.527 [2024-11-05 18:10:06.524697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:37.527 [2024-11-05 18:10:06.524760] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:37.527 [2024-11-05 18:10:06.524776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 
18:10:06.524911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.524976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:20:37.527 [2024-11-05 18:10:06.525272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:37.527 [2024-11-05 18:10:06.525729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.525991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:37.528 [2024-11-05 18:10:06.526203] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:37.528 [2024-11-05 18:10:06.526229] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18011f82-9c8c-4643-a4b7-0489ece3fd08 00:20:37.528 [2024-11-05 18:10:06.526251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:37.528 [2024-11-05 18:10:06.526274] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:37.528 [2024-11-05 18:10:06.526285] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:37.528 [2024-11-05 18:10:06.526300] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:37.528 [2024-11-05 18:10:06.526310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:37.528 [2024-11-05 18:10:06.526325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:37.528 [2024-11-05 18:10:06.526336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:37.528 [2024-11-05 18:10:06.526350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:37.528 [2024-11-05 18:10:06.526359] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:37.528 [2024-11-05 18:10:06.526375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
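
Editor's note: the "WAF: inf" in the statistics dump above is expected, not an error. Write amplification factor is evidently reported as total media writes over user writes here: with total writes: 960 (all FTL-internal metadata from this startup/shutdown cycle) and user writes: 0, the quotient has a zero denominator, which IEEE-754 floating-point division renders as +inf. A minimal sketch of that arithmetic (illustrative only; compute_waf is hypothetical, not SPDK's ftl_debug.c):

    #include <stdio.h>

    /* Hypothetical helper: WAF = total media writes / user (host) writes. */
    static double compute_waf(double total_writes, double user_writes)
    {
        /* 960.0 / 0.0 -> +inf under IEEE-754, hence "WAF: inf" in the dump. */
        return total_writes / user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", compute_waf(960.0, 0.0)); /* prints: WAF: inf */
        return 0;
    }
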
00:20:37.528 [2024-11-05 18:10:06.526375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.528 [2024-11-05 18:10:06.526386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:20:37.528 [2024-11-05 18:10:06.526402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms
00:20:37.528 [2024-11-05 18:10:06.526422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.544399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.528 [2024-11-05 18:10:06.544444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:20:37.528 [2024-11-05 18:10:06.544466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.960 ms
00:20:37.528 [2024-11-05 18:10:06.544476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.545024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:37.528 [2024-11-05 18:10:06.545053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:20:37.528 [2024-11-05 18:10:06.545070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms
00:20:37.528 [2024-11-05 18:10:06.545085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.608708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.608747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:20:37.528 [2024-11-05 18:10:06.608764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.608774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.608854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.608865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:20:37.528 [2024-11-05 18:10:06.608881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.608896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.608951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.608963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:20:37.528 [2024-11-05 18:10:06.608982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.608992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.609015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.609025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:20:37.528 [2024-11-05 18:10:06.609039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.609049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.724591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.724640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:20:37.528 [2024-11-05 18:10:06.724660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.724671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.817759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.817808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:37.528 [2024-11-05 18:10:06.817827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.817842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.817917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.817930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:37.528 [2024-11-05 18:10:06.817950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.817961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.817994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.818004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:37.528 [2024-11-05 18:10:06.818018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.818028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.818134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.818148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:37.528 [2024-11-05 18:10:06.818163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.818189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.818233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.818246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:20:37.528 [2024-11-05 18:10:06.818261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.528 [2024-11-05 18:10:06.818271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.528 [2024-11-05 18:10:06.818316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.528 [2024-11-05 18:10:06.818334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:37.528 [2024-11-05 18:10:06.818353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.529 [2024-11-05 18:10:06.818365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.529 [2024-11-05 18:10:06.818429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:37.529 [2024-11-05 18:10:06.818442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:37.529 [2024-11-05 18:10:06.818458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:37.529 [2024-11-05 18:10:06.818468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:37.529 [2024-11-05 18:10:06.818616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.348 ms, result 0
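
Editor's note: the spdk_dd invocation echoed on the next lines (from test/ftl/trim.sh) reads 65536 blocks from the ftl0 bdev into a flat file. Assuming the FTL bdev's 4096-byte logical block size, that is 65536 x 4096 B = 256 MiB, matching the "Copying: 256/256 [MB]" progress reported further down; at the average 23 MBps reported there, the copy should take roughly 11 seconds, consistent with the 18:10:09 -> 18:10:20 wall-clock span. A toy cross-check in C (illustrative only; the block size is an assumption):

    #include <stdio.h>

    int main(void)
    {
        /* --count=65536 blocks x 4096-byte FTL block size (assumed) */
        long long bytes = 65536LL * 4096;
        printf("copy size: %lld MiB\n", bytes >> 20);               /* 256 */
        printf("~%.0f s at 23 MBps\n", (double)(bytes >> 20) / 23); /* ~11 */
        return 0;
    }
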
00:20:38.467 18:10:07 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:38.726 [2024-11-05 18:10:07.862188] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:20:38.726 [2024-11-05 18:10:07.862311] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75748 ]
00:20:38.726 [2024-11-05 18:10:08.038956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:38.985 [2024-11-05 18:10:08.151492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:39.244 [2024-11-05 18:10:08.496544] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:39.244 [2024-11-05 18:10:08.496611] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:20:39.505 [2024-11-05 18:10:08.659567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.505 [2024-11-05 18:10:08.659614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:20:39.505 [2024-11-05 18:10:08.659632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:39.505 [2024-11-05 18:10:08.659642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.505 [2024-11-05 18:10:08.662551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.505 [2024-11-05 18:10:08.662588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:39.505 [2024-11-05 18:10:08.662601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms
00:20:39.505 [2024-11-05 18:10:08.662610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.505 [2024-11-05 18:10:08.662699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:20:39.505 [2024-11-05 18:10:08.663557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:20:39.505 [2024-11-05 18:10:08.663589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.505 [2024-11-05 18:10:08.663600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:20:39.505 [2024-11-05 18:10:08.663611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms
00:20:39.505 [2024-11-05 18:10:08.663621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.505 [2024-11-05 18:10:08.665055] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:20:39.505 [2024-11-05 18:10:08.682953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.505 [2024-11-05 18:10:08.682996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:20:39.505 [2024-11-05 18:10:08.683010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.927 ms
00:20:39.505 [2024-11-05 18:10:08.683020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.505 [2024-11-05 18:10:08.683110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.505 [2024-11-05 18:10:08.683125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:20:39.505 [2024-11-05 18:10:08.683136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms
00:20:39.505 [2024-11-05 18:10:08.683146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.505 [2024-11-05 18:10:08.689784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.505 [2024-11-05 18:10:08.689813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:20:39.505 [2024-11-05 18:10:08.689824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.610 ms
00:20:39.505 [2024-11-05 18:10:08.689834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.505 [2024-11-05 18:10:08.689924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.505 [2024-11-05 18:10:08.689937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:20:39.505 [2024-11-05 18:10:08.689949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:20:39.505 [2024-11-05 18:10:08.689959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.505 [2024-11-05 18:10:08.689985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.506 [2024-11-05 18:10:08.690000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:20:39.506 [2024-11-05 18:10:08.690010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:20:39.506 [2024-11-05 18:10:08.690020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.506 [2024-11-05 18:10:08.690040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:20:39.506 [2024-11-05 18:10:08.694330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.506 [2024-11-05 18:10:08.694361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:20:39.506 [2024-11-05 18:10:08.694374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.301 ms
00:20:39.506 [2024-11-05 18:10:08.694385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.506 [2024-11-05 18:10:08.694454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:39.506 [2024-11-05 18:10:08.694467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:39.506 [2024-11-05 18:10:08.694478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:20:39.506 [2024-11-05 18:10:08.694488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:39.506 [2024-11-05 18:10:08.694510] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:39.506 [2024-11-05 18:10:08.694535] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:39.506 [2024-11-05 18:10:08.694568] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:39.506 [2024-11-05 18:10:08.694585] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:39.506 [2024-11-05 18:10:08.694669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:39.506 [2024-11-05 18:10:08.694681] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:39.506 [2024-11-05 18:10:08.694694] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
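
Editor's note: the layout report that follows is internally consistent and worth a sanity check: 23,592,960 L2P entries at 4 bytes each is exactly 90.00 MiB, the size printed for "Region l2p", and equals the 0x5a00 blocks (23,040 x 4 KiB) listed for region type:0x2 in the superblock metadata layout further down. A quick check under that 4-KiB-block assumption (illustrative code, not SPDK's):

    #include <assert.h>
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t entries = 23592960;            /* "L2P entries" in the log  */
        uint64_t addr_sz = 4;                   /* "L2P address size"        */
        uint64_t bytes   = entries * addr_sz;   /* 94371840 bytes            */
        uint64_t blocks  = bytes / 4096;        /* 4-KiB FTL block (assumed) */

        printf("l2p region: %.2f MiB = 0x%" PRIx64 " blocks\n",
               bytes / (1024.0 * 1024.0), blocks); /* 90.00 MiB = 0x5a00 */
        assert(blocks == 0x5a00);
        return 0;
    }
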
00:20:39.506 [2024-11-05 18:10:08.694708] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.506 [2024-11-05 18:10:08.694723] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.506 [2024-11-05 18:10:08.694733] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:39.506 [2024-11-05 18:10:08.694743] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.506 [2024-11-05 18:10:08.694752] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.506 [2024-11-05 18:10:08.694762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.506 [2024-11-05 18:10:08.694773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.506 [2024-11-05 18:10:08.694784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.506 [2024-11-05 18:10:08.694793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:39.506 [2024-11-05 18:10:08.694803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.506 [2024-11-05 18:10:08.694874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.506 [2024-11-05 18:10:08.694885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.506 [2024-11-05 18:10:08.694898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:39.506 [2024-11-05 18:10:08.694907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.506 [2024-11-05 18:10:08.694989] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.506 [2024-11-05 18:10:08.695002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.506 [2024-11-05 18:10:08.695013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.506 [2024-11-05 18:10:08.695045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.506 [2024-11-05 18:10:08.695074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.506 [2024-11-05 18:10:08.695095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.506 [2024-11-05 18:10:08.695104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:39.506 [2024-11-05 18:10:08.695113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.506 [2024-11-05 18:10:08.695133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.506 [2024-11-05 18:10:08.695142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:39.506 [2024-11-05 18:10:08.695151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:39.506 [2024-11-05 18:10:08.695169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.506 [2024-11-05 18:10:08.695197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.506 [2024-11-05 18:10:08.695225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.506 [2024-11-05 18:10:08.695251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.506 [2024-11-05 18:10:08.695277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.506 [2024-11-05 18:10:08.695304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.506 [2024-11-05 18:10:08.695321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.506 [2024-11-05 18:10:08.695329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:39.506 [2024-11-05 18:10:08.695338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.506 [2024-11-05 18:10:08.695347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.506 [2024-11-05 18:10:08.695355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:39.506 [2024-11-05 18:10:08.695364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.506 [2024-11-05 18:10:08.695383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:39.506 [2024-11-05 18:10:08.695392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695401] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.506 [2024-11-05 18:10:08.695432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.506 [2024-11-05 18:10:08.695442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.506 [2024-11-05 18:10:08.695468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:39.506 [2024-11-05 18:10:08.695477] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.506 [2024-11-05 18:10:08.695486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.506 [2024-11-05 18:10:08.695495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.506 [2024-11-05 18:10:08.695504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.506 [2024-11-05 18:10:08.695513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.506 [2024-11-05 18:10:08.695524] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.506 [2024-11-05 18:10:08.695536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.506 [2024-11-05 18:10:08.695547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:39.506 [2024-11-05 18:10:08.695557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:39.506 [2024-11-05 18:10:08.695567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:39.506 [2024-11-05 18:10:08.695576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:39.506 [2024-11-05 18:10:08.695586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:39.506 [2024-11-05 18:10:08.695595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:39.506 [2024-11-05 18:10:08.695605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:39.506 [2024-11-05 18:10:08.695615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:39.506 [2024-11-05 18:10:08.695625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:39.506 [2024-11-05 18:10:08.695635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:39.506 [2024-11-05 18:10:08.695646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:39.506 [2024-11-05 18:10:08.695656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:39.506 [2024-11-05 18:10:08.695666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:39.507 [2024-11-05 18:10:08.695676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:39.507 [2024-11-05 18:10:08.695685] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.507 [2024-11-05 18:10:08.695695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.507 [2024-11-05 18:10:08.695706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.507 [2024-11-05 18:10:08.695716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.507 [2024-11-05 18:10:08.695725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.507 [2024-11-05 18:10:08.695737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.507 [2024-11-05 18:10:08.695747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.695764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.507 [2024-11-05 18:10:08.695778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:20:39.507 [2024-11-05 18:10:08.695788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.507 [2024-11-05 18:10:08.734310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.734346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:39.507 [2024-11-05 18:10:08.734359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.535 ms 00:20:39.507 [2024-11-05 18:10:08.734370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.507 [2024-11-05 18:10:08.734486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.734504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:39.507 [2024-11-05 18:10:08.734515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:39.507 [2024-11-05 18:10:08.734525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.507 [2024-11-05 18:10:08.803833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.803871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:39.507 [2024-11-05 18:10:08.803885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.396 ms 00:20:39.507 [2024-11-05 18:10:08.803899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.507 [2024-11-05 18:10:08.803993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.804007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:39.507 [2024-11-05 18:10:08.804019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:39.507 [2024-11-05 18:10:08.804029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.507 [2024-11-05 18:10:08.804469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.804490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:39.507 [2024-11-05 18:10:08.804502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:20:39.507 [2024-11-05 18:10:08.804516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.507 [2024-11-05 18:10:08.804626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.804640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:39.507 [2024-11-05 18:10:08.804651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:39.507 [2024-11-05 18:10:08.804660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.507 [2024-11-05 18:10:08.821623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.507 [2024-11-05 18:10:08.821657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:39.507 [2024-11-05 18:10:08.821679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.969 ms 00:20:39.507 [2024-11-05 18:10:08.821690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.766 [2024-11-05 18:10:08.839871] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:39.766 [2024-11-05 18:10:08.839909] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:39.766 [2024-11-05 18:10:08.839924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.766 [2024-11-05 18:10:08.839935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:39.766 [2024-11-05 18:10:08.839947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.161 ms 00:20:39.766 [2024-11-05 18:10:08.839956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.766 [2024-11-05 18:10:08.867861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.766 [2024-11-05 18:10:08.867910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:39.766 [2024-11-05 18:10:08.867923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.872 ms 00:20:39.766 [2024-11-05 18:10:08.867934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.766 [2024-11-05 18:10:08.884708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.766 [2024-11-05 18:10:08.884744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:39.766 [2024-11-05 18:10:08.884757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.724 ms 00:20:39.766 [2024-11-05 18:10:08.884767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.766 [2024-11-05 18:10:08.901635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.766 [2024-11-05 18:10:08.901677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:39.766 [2024-11-05 18:10:08.901690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.822 ms 00:20:39.767 [2024-11-05 18:10:08.901700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:08.902351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:08.902383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:39.767 [2024-11-05 18:10:08.902394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:20:39.767 [2024-11-05 18:10:08.902405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:08.981328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 
18:10:08.981381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:39.767 [2024-11-05 18:10:08.981397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.005 ms 00:20:39.767 [2024-11-05 18:10:08.981417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:08.992115] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:39.767 [2024-11-05 18:10:09.007775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:09.007818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:39.767 [2024-11-05 18:10:09.007834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.305 ms 00:20:39.767 [2024-11-05 18:10:09.007845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:09.007941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:09.007956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:39.767 [2024-11-05 18:10:09.007967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:39.767 [2024-11-05 18:10:09.007978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:09.008030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:09.008041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:39.767 [2024-11-05 18:10:09.008052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:39.767 [2024-11-05 18:10:09.008062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:09.008089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:09.008102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:39.767 [2024-11-05 18:10:09.008112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.767 [2024-11-05 18:10:09.008122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:09.008157] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:39.767 [2024-11-05 18:10:09.008171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:09.008181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:39.767 [2024-11-05 18:10:09.008191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:39.767 [2024-11-05 18:10:09.008201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:09.041563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:09.041602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:39.767 [2024-11-05 18:10:09.041616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.394 ms 00:20:39.767 [2024-11-05 18:10:09.041627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:09.041746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.767 [2024-11-05 18:10:09.041761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:39.767 [2024-11-05 
18:10:09.041772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:39.767 [2024-11-05 18:10:09.041783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.767 [2024-11-05 18:10:09.042680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.767 [2024-11-05 18:10:09.046511] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.441 ms, result 0 00:20:39.767 [2024-11-05 18:10:09.047487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:39.767 [2024-11-05 18:10:09.064866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:41.145  [2024-11-05T18:10:11.407Z] Copying: 26/256 [MB] (26 MBps) [2024-11-05T18:10:12.343Z] Copying: 50/256 [MB] (23 MBps) [2024-11-05T18:10:13.281Z] Copying: 73/256 [MB] (23 MBps) [2024-11-05T18:10:14.256Z] Copying: 96/256 [MB] (23 MBps) [2024-11-05T18:10:15.193Z] Copying: 120/256 [MB] (23 MBps) [2024-11-05T18:10:16.131Z] Copying: 143/256 [MB] (23 MBps) [2024-11-05T18:10:17.509Z] Copying: 165/256 [MB] (21 MBps) [2024-11-05T18:10:18.447Z] Copying: 188/256 [MB] (22 MBps) [2024-11-05T18:10:19.385Z] Copying: 211/256 [MB] (23 MBps) [2024-11-05T18:10:19.953Z] Copying: 235/256 [MB] (23 MBps) [2024-11-05T18:10:20.521Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-05 18:10:20.412436] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.198 [2024-11-05 18:10:20.428285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.428340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:51.198 [2024-11-05 18:10:20.428357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.198 [2024-11-05 18:10:20.428376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.198 [2024-11-05 18:10:20.428405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:51.198 [2024-11-05 18:10:20.432726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.432764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:51.198 [2024-11-05 18:10:20.432779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:20:51.198 [2024-11-05 18:10:20.432791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.198 [2024-11-05 18:10:20.433044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.433071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:51.198 [2024-11-05 18:10:20.433084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:20:51.198 [2024-11-05 18:10:20.433096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.198 [2024-11-05 18:10:20.435979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.436013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:51.198 [2024-11-05 18:10:20.436026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:20:51.198 [2024-11-05 18:10:20.436037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:51.198 [2024-11-05 18:10:20.442318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.442364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:51.198 [2024-11-05 18:10:20.442378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.266 ms 00:20:51.198 [2024-11-05 18:10:20.442389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.198 [2024-11-05 18:10:20.481705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.481751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:51.198 [2024-11-05 18:10:20.481767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.283 ms 00:20:51.198 [2024-11-05 18:10:20.481778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.198 [2024-11-05 18:10:20.501818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.501865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:51.198 [2024-11-05 18:10:20.501878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.004 ms 00:20:51.198 [2024-11-05 18:10:20.501892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.198 [2024-11-05 18:10:20.502037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.198 [2024-11-05 18:10:20.502052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:51.198 [2024-11-05 18:10:20.502065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:51.198 [2024-11-05 18:10:20.502075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.458 [2024-11-05 18:10:20.536397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.458 [2024-11-05 18:10:20.536440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:51.458 [2024-11-05 18:10:20.536454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.349 ms 00:20:51.458 [2024-11-05 18:10:20.536464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.458 [2024-11-05 18:10:20.569907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.458 [2024-11-05 18:10:20.569943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:51.458 [2024-11-05 18:10:20.569956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.441 ms 00:20:51.458 [2024-11-05 18:10:20.569965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.458 [2024-11-05 18:10:20.602960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.458 [2024-11-05 18:10:20.602996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:51.458 [2024-11-05 18:10:20.603009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.995 ms 00:20:51.458 [2024-11-05 18:10:20.603019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.458 [2024-11-05 18:10:20.635786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.458 [2024-11-05 18:10:20.635821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:51.458 [2024-11-05 18:10:20.635834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.742 ms 00:20:51.458 
[2024-11-05 18:10:20.635843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.458 [2024-11-05 18:10:20.635896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:51.459 [2024-11-05 18:10:20.635913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.635926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.635938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.635949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.635963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.635974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.635984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.635996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636164] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 
18:10:20.636437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:20:51.459 [2024-11-05 18:10:20.636706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:51.459 [2024-11-05 18:10:20.636828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.636989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:51.460 [2024-11-05 18:10:20.637007] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:51.460 [2024-11-05 18:10:20.637018] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 18011f82-9c8c-4643-a4b7-0489ece3fd08 00:20:51.460 [2024-11-05 18:10:20.637029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:51.460 [2024-11-05 18:10:20.637039] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:51.460 [2024-11-05 18:10:20.637048] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:51.460 [2024-11-05 18:10:20.637058] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:51.460 [2024-11-05 18:10:20.637067] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:51.460 [2024-11-05 18:10:20.637086] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:51.460 [2024-11-05 18:10:20.637096] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:51.460 [2024-11-05 18:10:20.637105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:51.460 [2024-11-05 18:10:20.637114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:51.460 [2024-11-05 18:10:20.637124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.460 [2024-11-05 18:10:20.637138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:51.460 [2024-11-05 18:10:20.637149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:20:51.460 [2024-11-05 18:10:20.637158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.460 [2024-11-05 18:10:20.654963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.460 [2024-11-05 18:10:20.654996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:51.460 [2024-11-05 18:10:20.655008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.815 ms 00:20:51.460 [2024-11-05 18:10:20.655017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.460 [2024-11-05 18:10:20.655524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.460 [2024-11-05 18:10:20.655549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:51.460 [2024-11-05 18:10:20.655560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:20:51.460 [2024-11-05 18:10:20.655570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.460 [2024-11-05 18:10:20.705559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.460 [2024-11-05 18:10:20.705593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.460 [2024-11-05 18:10:20.705605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.460 [2024-11-05 18:10:20.705616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.460 [2024-11-05 18:10:20.705700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.460 [2024-11-05 18:10:20.705712] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.460 [2024-11-05 18:10:20.705722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.460 [2024-11-05 18:10:20.705732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.460 [2024-11-05 18:10:20.705778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.460 [2024-11-05 18:10:20.705792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.460 [2024-11-05 18:10:20.705802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.460 [2024-11-05 18:10:20.705812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.460 [2024-11-05 18:10:20.705831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.460 [2024-11-05 18:10:20.705845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.460 [2024-11-05 18:10:20.705856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.460 [2024-11-05 18:10:20.705866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.825773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.825822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.719 [2024-11-05 18:10:20.825838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.825848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.922828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.922882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.719 [2024-11-05 18:10:20.922896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.922906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.922969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.922980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.719 [2024-11-05 18:10:20.922991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.923001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.923029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.923040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.719 [2024-11-05 18:10:20.923055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.923065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.923169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.923183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.719 [2024-11-05 18:10:20.923194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.923203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.923239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.923251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:51.719 [2024-11-05 18:10:20.923262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.923277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.923313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.923325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.719 [2024-11-05 18:10:20.923336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.923345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.923385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.719 [2024-11-05 18:10:20.923395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.719 [2024-11-05 18:10:20.923428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.719 [2024-11-05 18:10:20.923440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.719 [2024-11-05 18:10:20.923576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.108 ms, result 0 00:20:52.657 00:20:52.657 00:20:52.657 18:10:21 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:53.226 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:53.226 18:10:22 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:53.226 18:10:22 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:53.226 18:10:22 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:53.226 18:10:22 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:53.226 18:10:22 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:53.226 18:10:22 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:53.226 18:10:22 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 75690 00:20:53.226 18:10:22 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75690 ']' 00:20:53.226 18:10:22 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75690 00:20:53.226 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (75690) - No such process 00:20:53.226 Process with pid 75690 is not found 00:20:53.226 18:10:22 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 75690 is not found' 00:20:53.226 00:20:53.226 real 1m11.820s 00:20:53.226 user 1m37.944s 00:20:53.226 sys 0m6.687s 00:20:53.226 18:10:22 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:53.226 ************************************ 00:20:53.226 END TEST ftl_trim 00:20:53.226 ************************************ 00:20:53.226 18:10:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:53.226 18:10:22 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:53.226 18:10:22 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:53.226 18:10:22 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:53.226 18:10:22 ftl -- common/autotest_common.sh@10 
-- # set +x 00:20:53.226 ************************************ 00:20:53.226 START TEST ftl_restore 00:20:53.226 ************************************ 00:20:53.226 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:53.486 * Looking for test storage... 00:20:53.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:53.486 18:10:22 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:53.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.486 --rc genhtml_branch_coverage=1 00:20:53.486 --rc genhtml_function_coverage=1 00:20:53.486 --rc genhtml_legend=1 00:20:53.486 --rc geninfo_all_blocks=1 00:20:53.486 --rc geninfo_unexecuted_blocks=1 00:20:53.486 00:20:53.486 ' 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:53.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.486 --rc genhtml_branch_coverage=1 00:20:53.486 --rc genhtml_function_coverage=1 00:20:53.486 --rc genhtml_legend=1 00:20:53.486 --rc geninfo_all_blocks=1 00:20:53.486 --rc geninfo_unexecuted_blocks=1 00:20:53.486 00:20:53.486 ' 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:53.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.486 --rc genhtml_branch_coverage=1 00:20:53.486 --rc genhtml_function_coverage=1 00:20:53.486 --rc genhtml_legend=1 00:20:53.486 --rc geninfo_all_blocks=1 00:20:53.486 --rc geninfo_unexecuted_blocks=1 00:20:53.486 00:20:53.486 ' 00:20:53.486 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:53.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.486 --rc genhtml_branch_coverage=1 00:20:53.486 --rc genhtml_function_coverage=1 00:20:53.486 --rc genhtml_legend=1 00:20:53.486 --rc geninfo_all_blocks=1 00:20:53.486 --rc geninfo_unexecuted_blocks=1 00:20:53.486 00:20:53.486 ' 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
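An aside on the 'lt 1.15 2' / 'cmp_versions' trace above: the harness is checking whether the installed lcov predates version 2 before settling on its '--rc lcov_*_coverage' flags. A minimal sketch of that comparison pattern, assuming purely numeric version fields; the function name ver_lt is illustrative, not the exact scripts/common.sh source:

    # Split each version string on '.', '-' or ':' and compare field by
    # field; missing fields are treated as 0. Returns 0 when $1 < $2.
    ver_lt() {
        local -a ver1 ver2
        local v ver1_l ver2_l d1 d2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 < d2)) && return 0
            ((d1 > d2)) && return 1
        done
        return 1    # equal is not "less than"
    }

    ver_lt 1.15 2 && echo "lcov predates v2"   # fires: 1 < 2 in the first field
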
00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:53.486 18:10:22 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:53.745 18:10:22 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.lRzDbf3fYM 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:53.746 
18:10:22 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=75969 00:20:53.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 75969 00:20:53.746 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 75969 ']' 00:20:53.746 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.746 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:53.746 18:10:22 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:53.746 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.746 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:53.746 18:10:22 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:53.746 [2024-11-05 18:10:22.929529] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:20:53.746 [2024-11-05 18:10:22.929645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75969 ] 00:20:54.005 [2024-11-05 18:10:23.108707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.005 [2024-11-05 18:10:23.214077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.944 18:10:23 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:54.944 18:10:23 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:20:54.944 18:10:24 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:54.944 18:10:24 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:54.944 18:10:24 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:54.944 18:10:24 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:54.944 18:10:24 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:54.944 18:10:24 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:55.203 18:10:24 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:55.203 18:10:24 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:55.203 18:10:24 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:55.203 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:20:55.203 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:55.203 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:55.203 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:55.203 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:55.203 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:55.203 { 00:20:55.203 "name": "nvme0n1", 00:20:55.203 "aliases": [ 00:20:55.203 "ea5c50dd-d44a-46ed-80cb-a2e0e7937696" 00:20:55.203 ], 00:20:55.203 "product_name": "NVMe disk", 00:20:55.203 "block_size": 4096, 00:20:55.203 "num_blocks": 1310720, 00:20:55.203 "uuid": 
"ea5c50dd-d44a-46ed-80cb-a2e0e7937696", 00:20:55.203 "numa_id": -1, 00:20:55.203 "assigned_rate_limits": { 00:20:55.203 "rw_ios_per_sec": 0, 00:20:55.203 "rw_mbytes_per_sec": 0, 00:20:55.203 "r_mbytes_per_sec": 0, 00:20:55.203 "w_mbytes_per_sec": 0 00:20:55.203 }, 00:20:55.203 "claimed": true, 00:20:55.203 "claim_type": "read_many_write_one", 00:20:55.203 "zoned": false, 00:20:55.203 "supported_io_types": { 00:20:55.203 "read": true, 00:20:55.203 "write": true, 00:20:55.203 "unmap": true, 00:20:55.203 "flush": true, 00:20:55.203 "reset": true, 00:20:55.203 "nvme_admin": true, 00:20:55.203 "nvme_io": true, 00:20:55.203 "nvme_io_md": false, 00:20:55.203 "write_zeroes": true, 00:20:55.203 "zcopy": false, 00:20:55.203 "get_zone_info": false, 00:20:55.204 "zone_management": false, 00:20:55.204 "zone_append": false, 00:20:55.204 "compare": true, 00:20:55.204 "compare_and_write": false, 00:20:55.204 "abort": true, 00:20:55.204 "seek_hole": false, 00:20:55.204 "seek_data": false, 00:20:55.204 "copy": true, 00:20:55.204 "nvme_iov_md": false 00:20:55.204 }, 00:20:55.204 "driver_specific": { 00:20:55.204 "nvme": [ 00:20:55.204 { 00:20:55.204 "pci_address": "0000:00:11.0", 00:20:55.204 "trid": { 00:20:55.204 "trtype": "PCIe", 00:20:55.204 "traddr": "0000:00:11.0" 00:20:55.204 }, 00:20:55.204 "ctrlr_data": { 00:20:55.204 "cntlid": 0, 00:20:55.204 "vendor_id": "0x1b36", 00:20:55.204 "model_number": "QEMU NVMe Ctrl", 00:20:55.204 "serial_number": "12341", 00:20:55.204 "firmware_revision": "8.0.0", 00:20:55.204 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:55.204 "oacs": { 00:20:55.204 "security": 0, 00:20:55.204 "format": 1, 00:20:55.204 "firmware": 0, 00:20:55.204 "ns_manage": 1 00:20:55.204 }, 00:20:55.204 "multi_ctrlr": false, 00:20:55.204 "ana_reporting": false 00:20:55.204 }, 00:20:55.204 "vs": { 00:20:55.204 "nvme_version": "1.4" 00:20:55.204 }, 00:20:55.204 "ns_data": { 00:20:55.204 "id": 1, 00:20:55.204 "can_share": false 00:20:55.204 } 00:20:55.204 } 00:20:55.204 ], 00:20:55.204 "mp_policy": "active_passive" 00:20:55.204 } 00:20:55.204 } 00:20:55.204 ]' 00:20:55.204 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:55.463 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:55.463 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:55.463 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:20:55.463 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:20:55.463 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:20:55.463 18:10:24 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:55.463 18:10:24 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:55.463 18:10:24 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:55.463 18:10:24 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:55.463 18:10:24 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:55.721 18:10:24 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=dc8cbc65-68c0-4998-8c39-e5227a6a8465 00:20:55.721 18:10:24 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:55.721 18:10:24 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc8cbc65-68c0-4998-8c39-e5227a6a8465 00:20:55.721 18:10:25 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:55.980 18:10:25 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=52602b4b-50d9-4cd6-bb96-71b2c0c63c4f 00:20:55.980 18:10:25 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 52602b4b-50d9-4cd6-bb96-71b2c0c63c4f 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:56.239 18:10:25 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.239 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.239 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:56.239 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:56.239 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:56.239 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.499 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:56.499 { 00:20:56.499 "name": "d732124b-dc3d-4faa-bf67-108b3083ce0b", 00:20:56.499 "aliases": [ 00:20:56.499 "lvs/nvme0n1p0" 00:20:56.499 ], 00:20:56.499 "product_name": "Logical Volume", 00:20:56.499 "block_size": 4096, 00:20:56.499 "num_blocks": 26476544, 00:20:56.499 "uuid": "d732124b-dc3d-4faa-bf67-108b3083ce0b", 00:20:56.499 "assigned_rate_limits": { 00:20:56.499 "rw_ios_per_sec": 0, 00:20:56.499 "rw_mbytes_per_sec": 0, 00:20:56.499 "r_mbytes_per_sec": 0, 00:20:56.499 "w_mbytes_per_sec": 0 00:20:56.499 }, 00:20:56.499 "claimed": false, 00:20:56.499 "zoned": false, 00:20:56.499 "supported_io_types": { 00:20:56.499 "read": true, 00:20:56.499 "write": true, 00:20:56.499 "unmap": true, 00:20:56.499 "flush": false, 00:20:56.499 "reset": true, 00:20:56.499 "nvme_admin": false, 00:20:56.499 "nvme_io": false, 00:20:56.499 "nvme_io_md": false, 00:20:56.499 "write_zeroes": true, 00:20:56.499 "zcopy": false, 00:20:56.499 "get_zone_info": false, 00:20:56.499 "zone_management": false, 00:20:56.499 "zone_append": false, 00:20:56.499 "compare": false, 00:20:56.499 "compare_and_write": false, 00:20:56.499 "abort": false, 00:20:56.499 "seek_hole": true, 00:20:56.499 "seek_data": true, 00:20:56.499 "copy": false, 00:20:56.499 "nvme_iov_md": false 00:20:56.499 }, 00:20:56.499 "driver_specific": { 00:20:56.499 "lvol": { 00:20:56.499 "lvol_store_uuid": "52602b4b-50d9-4cd6-bb96-71b2c0c63c4f", 00:20:56.499 "base_bdev": "nvme0n1", 00:20:56.499 "thin_provision": true, 00:20:56.499 "num_allocated_clusters": 0, 00:20:56.499 "snapshot": false, 00:20:56.499 "clone": false, 00:20:56.499 "esnap_clone": false 00:20:56.499 } 00:20:56.499 } 00:20:56.499 } 00:20:56.499 ]' 00:20:56.499 18:10:25 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:56.499 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:56.499 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:56.499 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:56.499 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:56.499 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:56.499 18:10:25 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:56.499 18:10:25 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:56.499 18:10:25 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:56.758 18:10:25 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:56.758 18:10:25 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:56.758 18:10:25 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.758 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:56.758 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:56.758 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:56.758 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:56.758 18:10:25 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:57.018 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:57.018 { 00:20:57.018 "name": "d732124b-dc3d-4faa-bf67-108b3083ce0b", 00:20:57.018 "aliases": [ 00:20:57.018 "lvs/nvme0n1p0" 00:20:57.018 ], 00:20:57.018 "product_name": "Logical Volume", 00:20:57.018 "block_size": 4096, 00:20:57.018 "num_blocks": 26476544, 00:20:57.018 "uuid": "d732124b-dc3d-4faa-bf67-108b3083ce0b", 00:20:57.018 "assigned_rate_limits": { 00:20:57.018 "rw_ios_per_sec": 0, 00:20:57.018 "rw_mbytes_per_sec": 0, 00:20:57.018 "r_mbytes_per_sec": 0, 00:20:57.018 "w_mbytes_per_sec": 0 00:20:57.018 }, 00:20:57.018 "claimed": false, 00:20:57.018 "zoned": false, 00:20:57.018 "supported_io_types": { 00:20:57.018 "read": true, 00:20:57.018 "write": true, 00:20:57.018 "unmap": true, 00:20:57.018 "flush": false, 00:20:57.018 "reset": true, 00:20:57.018 "nvme_admin": false, 00:20:57.018 "nvme_io": false, 00:20:57.018 "nvme_io_md": false, 00:20:57.018 "write_zeroes": true, 00:20:57.018 "zcopy": false, 00:20:57.018 "get_zone_info": false, 00:20:57.018 "zone_management": false, 00:20:57.018 "zone_append": false, 00:20:57.018 "compare": false, 00:20:57.018 "compare_and_write": false, 00:20:57.018 "abort": false, 00:20:57.018 "seek_hole": true, 00:20:57.018 "seek_data": true, 00:20:57.018 "copy": false, 00:20:57.018 "nvme_iov_md": false 00:20:57.018 }, 00:20:57.018 "driver_specific": { 00:20:57.018 "lvol": { 00:20:57.018 "lvol_store_uuid": "52602b4b-50d9-4cd6-bb96-71b2c0c63c4f", 00:20:57.018 "base_bdev": "nvme0n1", 00:20:57.018 "thin_provision": true, 00:20:57.018 "num_allocated_clusters": 0, 00:20:57.018 "snapshot": false, 00:20:57.018 "clone": false, 00:20:57.018 "esnap_clone": false 00:20:57.018 } 00:20:57.018 } 00:20:57.018 } 00:20:57.018 ]' 00:20:57.018 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
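The jq probes here (block_size just above, num_blocks just below) are get_bdev_size at work: read both fields out of bdev_get_bdevs output and convert to MiB. A standalone sketch of the same arithmetic, reusing the rpc.py path and lvol name from this run; variable names are illustrative, and this captures the helper's effect rather than its exact source:

    # get_bdev_size pattern: MiB = block_size * num_blocks / 1024 / 1024
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" bdev_get_bdevs -b d732124b-dc3d-4faa-bf67-108b3083ce0b)
    bs=$(jq '.[] .block_size' <<< "$info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$info")    # 26476544 in this run
    echo $((bs * nb / 1024 / 1024))           # 4096 * 26476544 / 2^20 = 103424 MiB
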
00:20:57.018 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:57.018 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:57.018 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:57.018 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:57.018 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:57.018 18:10:26 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:57.018 18:10:26 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:57.277 18:10:26 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:57.277 18:10:26 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:57.277 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:57.277 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:57.277 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:57.277 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:57.277 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d732124b-dc3d-4faa-bf67-108b3083ce0b 00:20:57.537 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:57.537 { 00:20:57.537 "name": "d732124b-dc3d-4faa-bf67-108b3083ce0b", 00:20:57.537 "aliases": [ 00:20:57.537 "lvs/nvme0n1p0" 00:20:57.537 ], 00:20:57.537 "product_name": "Logical Volume", 00:20:57.537 "block_size": 4096, 00:20:57.537 "num_blocks": 26476544, 00:20:57.537 "uuid": "d732124b-dc3d-4faa-bf67-108b3083ce0b", 00:20:57.537 "assigned_rate_limits": { 00:20:57.537 "rw_ios_per_sec": 0, 00:20:57.537 "rw_mbytes_per_sec": 0, 00:20:57.537 "r_mbytes_per_sec": 0, 00:20:57.537 "w_mbytes_per_sec": 0 00:20:57.537 }, 00:20:57.537 "claimed": false, 00:20:57.537 "zoned": false, 00:20:57.537 "supported_io_types": { 00:20:57.537 "read": true, 00:20:57.537 "write": true, 00:20:57.537 "unmap": true, 00:20:57.537 "flush": false, 00:20:57.537 "reset": true, 00:20:57.537 "nvme_admin": false, 00:20:57.537 "nvme_io": false, 00:20:57.537 "nvme_io_md": false, 00:20:57.537 "write_zeroes": true, 00:20:57.537 "zcopy": false, 00:20:57.537 "get_zone_info": false, 00:20:57.537 "zone_management": false, 00:20:57.537 "zone_append": false, 00:20:57.537 "compare": false, 00:20:57.537 "compare_and_write": false, 00:20:57.537 "abort": false, 00:20:57.537 "seek_hole": true, 00:20:57.537 "seek_data": true, 00:20:57.537 "copy": false, 00:20:57.537 "nvme_iov_md": false 00:20:57.537 }, 00:20:57.537 "driver_specific": { 00:20:57.537 "lvol": { 00:20:57.537 "lvol_store_uuid": "52602b4b-50d9-4cd6-bb96-71b2c0c63c4f", 00:20:57.537 "base_bdev": "nvme0n1", 00:20:57.537 "thin_provision": true, 00:20:57.537 "num_allocated_clusters": 0, 00:20:57.537 "snapshot": false, 00:20:57.537 "clone": false, 00:20:57.537 "esnap_clone": false 00:20:57.537 } 00:20:57.537 } 00:20:57.537 } 00:20:57.537 ]' 00:20:57.537 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:57.537 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:57.537 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:57.537 18:10:26 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:20:57.537 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:57.537 18:10:26 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:57.537 18:10:26 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:57.537 18:10:26 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d732124b-dc3d-4faa-bf67-108b3083ce0b --l2p_dram_limit 10' 00:20:57.537 18:10:26 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:57.537 18:10:26 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:57.537 18:10:26 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:57.537 18:10:26 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:57.537 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:57.537 18:10:26 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d732124b-dc3d-4faa-bf67-108b3083ce0b --l2p_dram_limit 10 -c nvc0n1p0 00:20:57.798 [2024-11-05 18:10:26.984281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:26.984470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:57.798 [2024-11-05 18:10:26.984499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:57.798 [2024-11-05 18:10:26.984511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:26.984583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:26.984595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.798 [2024-11-05 18:10:26.984609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:57.798 [2024-11-05 18:10:26.984620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:26.984650] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:57.798 [2024-11-05 18:10:26.985712] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:57.798 [2024-11-05 18:10:26.985740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:26.985752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.798 [2024-11-05 18:10:26.985766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:20:57.798 [2024-11-05 18:10:26.985777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:26.985857] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5f1fabc7-dfe3-4e4a-be3d-af24c10698b1 00:20:57.798 [2024-11-05 18:10:26.987313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:26.987354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:57.798 [2024-11-05 18:10:26.987366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:57.798 [2024-11-05 18:10:26.987379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:26.994872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 
18:10:26.995045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.798 [2024-11-05 18:10:26.995068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.455 ms 00:20:57.798 [2024-11-05 18:10:26.995081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:26.995185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:26.995202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.798 [2024-11-05 18:10:26.995213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:57.798 [2024-11-05 18:10:26.995230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:26.995293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:26.995308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:57.798 [2024-11-05 18:10:26.995319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:57.798 [2024-11-05 18:10:26.995335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:26.995358] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:57.798 [2024-11-05 18:10:27.000365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:27.000398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.798 [2024-11-05 18:10:27.000421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:20:57.798 [2024-11-05 18:10:27.000432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:27.000465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.798 [2024-11-05 18:10:27.000476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:57.798 [2024-11-05 18:10:27.000488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.798 [2024-11-05 18:10:27.000498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.798 [2024-11-05 18:10:27.000533] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:57.798 [2024-11-05 18:10:27.000652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:57.798 [2024-11-05 18:10:27.000672] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:57.798 [2024-11-05 18:10:27.000684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:57.798 [2024-11-05 18:10:27.000699] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:57.798 [2024-11-05 18:10:27.000711] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:57.798 [2024-11-05 18:10:27.000726] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:57.799 [2024-11-05 18:10:27.000736] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:57.799 [2024-11-05 18:10:27.000751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:57.799 [2024-11-05 18:10:27.000760] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:57.799 [2024-11-05 18:10:27.000773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.799 [2024-11-05 18:10:27.000783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:57.799 [2024-11-05 18:10:27.000795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:57.799 [2024-11-05 18:10:27.000815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.799 [2024-11-05 18:10:27.000885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.799 [2024-11-05 18:10:27.000897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:57.799 [2024-11-05 18:10:27.000909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:57.799 [2024-11-05 18:10:27.000919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.799 [2024-11-05 18:10:27.001005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:57.799 [2024-11-05 18:10:27.001018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:57.799 [2024-11-05 18:10:27.001031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:57.799 [2024-11-05 18:10:27.001063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:57.799 [2024-11-05 18:10:27.001096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.799 [2024-11-05 18:10:27.001117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:57.799 [2024-11-05 18:10:27.001127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:57.799 [2024-11-05 18:10:27.001139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.799 [2024-11-05 18:10:27.001149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:57.799 [2024-11-05 18:10:27.001160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:57.799 [2024-11-05 18:10:27.001169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:57.799 [2024-11-05 18:10:27.001191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:57.799 [2024-11-05 18:10:27.001224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:57.799 
[2024-11-05 18:10:27.001253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:57.799 [2024-11-05 18:10:27.001283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:57.799 [2024-11-05 18:10:27.001311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:57.799 [2024-11-05 18:10:27.001343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.799 [2024-11-05 18:10:27.001364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:57.799 [2024-11-05 18:10:27.001373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:57.799 [2024-11-05 18:10:27.001383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.799 [2024-11-05 18:10:27.001392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:57.799 [2024-11-05 18:10:27.001403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:57.799 [2024-11-05 18:10:27.001424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:57.799 [2024-11-05 18:10:27.001444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:57.799 [2024-11-05 18:10:27.001456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001464] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:57.799 [2024-11-05 18:10:27.001477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:57.799 [2024-11-05 18:10:27.001488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.799 [2024-11-05 18:10:27.001512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:57.799 [2024-11-05 18:10:27.001526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:57.799 [2024-11-05 18:10:27.001535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:57.799 [2024-11-05 18:10:27.001546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:57.799 [2024-11-05 18:10:27.001555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:57.799 [2024-11-05 18:10:27.001582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:57.799 [2024-11-05 18:10:27.001595] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:57.799 [2024-11-05 
18:10:27.001610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.799 [2024-11-05 18:10:27.001623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:57.799 [2024-11-05 18:10:27.001636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:57.799 [2024-11-05 18:10:27.001645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:57.799 [2024-11-05 18:10:27.001657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:57.799 [2024-11-05 18:10:27.001667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:57.799 [2024-11-05 18:10:27.001686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:57.799 [2024-11-05 18:10:27.001697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:57.799 [2024-11-05 18:10:27.001709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:57.799 [2024-11-05 18:10:27.001719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:57.799 [2024-11-05 18:10:27.001733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:57.799 [2024-11-05 18:10:27.001743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:57.799 [2024-11-05 18:10:27.001755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:57.799 [2024-11-05 18:10:27.001765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:57.799 [2024-11-05 18:10:27.001778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:57.799 [2024-11-05 18:10:27.001788] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:57.799 [2024-11-05 18:10:27.001802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.799 [2024-11-05 18:10:27.001813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:57.799 [2024-11-05 18:10:27.001825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:57.799 [2024-11-05 18:10:27.001834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:57.799 [2024-11-05 18:10:27.001846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:57.799 [2024-11-05 18:10:27.001856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.799 [2024-11-05 18:10:27.001868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:57.799 [2024-11-05 18:10:27.001896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:20:57.799 [2024-11-05 18:10:27.001909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.799 [2024-11-05 18:10:27.001950] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:57.799 [2024-11-05 18:10:27.001967] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:02.026 [2024-11-05 18:10:30.672894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.672956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:02.026 [2024-11-05 18:10:30.672973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3676.901 ms 00:21:02.026 [2024-11-05 18:10:30.672986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.711114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.711164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.026 [2024-11-05 18:10:30.711179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.883 ms 00:21:02.026 [2024-11-05 18:10:30.711193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.711320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.711339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:02.026 [2024-11-05 18:10:30.711350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:02.026 [2024-11-05 18:10:30.711365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.753939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.754135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.026 [2024-11-05 18:10:30.754158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.574 ms 00:21:02.026 [2024-11-05 18:10:30.754171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.754208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.754227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.026 [2024-11-05 18:10:30.754238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.026 [2024-11-05 18:10:30.754251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.754783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.754805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.026 [2024-11-05 18:10:30.754817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:21:02.026 [2024-11-05 18:10:30.754829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 
[2024-11-05 18:10:30.754929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.754944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.026 [2024-11-05 18:10:30.754958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:02.026 [2024-11-05 18:10:30.754974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.775465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.775505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.026 [2024-11-05 18:10:30.775518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.503 ms 00:21:02.026 [2024-11-05 18:10:30.775531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.787228] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:02.026 [2024-11-05 18:10:30.790525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.790554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:02.026 [2024-11-05 18:10:30.790568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.938 ms 00:21:02.026 [2024-11-05 18:10:30.790578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.892890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.892940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:02.026 [2024-11-05 18:10:30.892960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.445 ms 00:21:02.026 [2024-11-05 18:10:30.892971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.893142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.893158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:02.026 [2024-11-05 18:10:30.893175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:21:02.026 [2024-11-05 18:10:30.893184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.927891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.927929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:02.026 [2024-11-05 18:10:30.927946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.711 ms 00:21:02.026 [2024-11-05 18:10:30.927956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.961675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.961716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:02.026 [2024-11-05 18:10:30.961733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.726 ms 00:21:02.026 [2024-11-05 18:10:30.961743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:30.962461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.026 [2024-11-05 18:10:30.962484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:02.026 
[2024-11-05 18:10:30.962498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:21:02.026 [2024-11-05 18:10:30.962508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.026 [2024-11-05 18:10:31.059976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.027 [2024-11-05 18:10:31.060014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:02.027 [2024-11-05 18:10:31.060034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.554 ms 00:21:02.027 [2024-11-05 18:10:31.060045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.027 [2024-11-05 18:10:31.095136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.027 [2024-11-05 18:10:31.095357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:02.027 [2024-11-05 18:10:31.095384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.068 ms 00:21:02.027 [2024-11-05 18:10:31.095395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.027 [2024-11-05 18:10:31.129627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.027 [2024-11-05 18:10:31.129663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:02.027 [2024-11-05 18:10:31.129687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.227 ms 00:21:02.027 [2024-11-05 18:10:31.129697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.027 [2024-11-05 18:10:31.165653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.027 [2024-11-05 18:10:31.165697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:02.027 [2024-11-05 18:10:31.165714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.970 ms 00:21:02.027 [2024-11-05 18:10:31.165724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.027 [2024-11-05 18:10:31.165769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.027 [2024-11-05 18:10:31.165782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:02.027 [2024-11-05 18:10:31.165798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:02.027 [2024-11-05 18:10:31.165809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.027 [2024-11-05 18:10:31.165908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.027 [2024-11-05 18:10:31.165921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:02.027 [2024-11-05 18:10:31.165937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:02.027 [2024-11-05 18:10:31.165947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.027 [2024-11-05 18:10:31.166910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4189.014 ms, result 0 00:21:02.027 { 00:21:02.027 "name": "ftl0", 00:21:02.027 "uuid": "5f1fabc7-dfe3-4e4a-be3d-af24c10698b1" 00:21:02.027 } 00:21:02.027 18:10:31 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:02.027 18:10:31 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:02.292 18:10:31 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:02.292 18:10:31 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:02.292 [2024-11-05 18:10:31.585891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.292 [2024-11-05 18:10:31.585937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:02.292 [2024-11-05 18:10:31.585950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:02.292 [2024-11-05 18:10:31.585971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.292 [2024-11-05 18:10:31.585995] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:02.292 [2024-11-05 18:10:31.590082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.292 [2024-11-05 18:10:31.590114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:02.292 [2024-11-05 18:10:31.590128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:21:02.292 [2024-11-05 18:10:31.590138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.292 [2024-11-05 18:10:31.590365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.292 [2024-11-05 18:10:31.590380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:02.292 [2024-11-05 18:10:31.590396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:21:02.292 [2024-11-05 18:10:31.590406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.292 [2024-11-05 18:10:31.592847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.292 [2024-11-05 18:10:31.592871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:02.292 [2024-11-05 18:10:31.592885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:21:02.292 [2024-11-05 18:10:31.592895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.292 [2024-11-05 18:10:31.597686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.292 [2024-11-05 18:10:31.597717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:02.292 [2024-11-05 18:10:31.597734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:21:02.292 [2024-11-05 18:10:31.597743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.632675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.553 [2024-11-05 18:10:31.632710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:02.553 [2024-11-05 18:10:31.632726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.922 ms 00:21:02.553 [2024-11-05 18:10:31.632736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.653400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.553 [2024-11-05 18:10:31.653443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:02.553 [2024-11-05 18:10:31.653459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.651 ms 00:21:02.553 [2024-11-05 18:10:31.653469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.653610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.553 [2024-11-05 18:10:31.653625] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:02.553 [2024-11-05 18:10:31.653638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:02.553 [2024-11-05 18:10:31.653648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.687844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.553 [2024-11-05 18:10:31.687879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:02.553 [2024-11-05 18:10:31.687895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.230 ms 00:21:02.553 [2024-11-05 18:10:31.687905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.723400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.553 [2024-11-05 18:10:31.723445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:02.553 [2024-11-05 18:10:31.723462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.509 ms 00:21:02.553 [2024-11-05 18:10:31.723472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.758389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.553 [2024-11-05 18:10:31.758434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:02.553 [2024-11-05 18:10:31.758466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.928 ms 00:21:02.553 [2024-11-05 18:10:31.758476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.792256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.553 [2024-11-05 18:10:31.792291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:02.553 [2024-11-05 18:10:31.792307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.728 ms 00:21:02.553 [2024-11-05 18:10:31.792316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.553 [2024-11-05 18:10:31.792356] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:02.553 [2024-11-05 18:10:31.792371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792497] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:02.553 [2024-11-05 18:10:31.792706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 
[2024-11-05 18:10:31.792787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.792980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:02.554 [2024-11-05 18:10:31.793109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:02.554 [2024-11-05 18:10:31.793626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:02.554 [2024-11-05 18:10:31.793642] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f1fabc7-dfe3-4e4a-be3d-af24c10698b1 00:21:02.554 [2024-11-05 18:10:31.793653] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:02.554 [2024-11-05 18:10:31.793668] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:02.554 [2024-11-05 18:10:31.793678] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:02.554 [2024-11-05 18:10:31.793703] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:02.554 [2024-11-05 18:10:31.793713] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:02.554 [2024-11-05 18:10:31.793726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:02.554 [2024-11-05 18:10:31.793735] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:02.554 [2024-11-05 18:10:31.793747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:02.554 [2024-11-05 18:10:31.793756] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:02.554 [2024-11-05 18:10:31.793768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.554 [2024-11-05 18:10:31.793779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:02.554 [2024-11-05 18:10:31.793792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.416 ms 00:21:02.554 [2024-11-05 18:10:31.793801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.554 [2024-11-05 18:10:31.812837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.554 [2024-11-05 18:10:31.812870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:02.554 [2024-11-05 18:10:31.812885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.010 ms 00:21:02.554 [2024-11-05 18:10:31.812894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.554 [2024-11-05 18:10:31.813433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:02.554 [2024-11-05 18:10:31.813462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:02.554 [2024-11-05 18:10:31.813476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:21:02.555 [2024-11-05 18:10:31.813488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.555 [2024-11-05 18:10:31.874569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.555 [2024-11-05 18:10:31.874605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:02.555 [2024-11-05 18:10:31.874620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.555 [2024-11-05 18:10:31.874630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.555 [2024-11-05 18:10:31.874684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.555 [2024-11-05 18:10:31.874694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:02.555 [2024-11-05 18:10:31.874707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.555 [2024-11-05 18:10:31.874719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.555 [2024-11-05 18:10:31.874794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.555 [2024-11-05 18:10:31.874808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:02.555 [2024-11-05 18:10:31.874820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.555 [2024-11-05 18:10:31.874829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.555 [2024-11-05 18:10:31.874853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.555 [2024-11-05 18:10:31.874863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:02.555 [2024-11-05 18:10:31.874875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.555 [2024-11-05 18:10:31.874885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:31.990185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:31.990449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:02.814 [2024-11-05 18:10:31.990478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:02.814 [2024-11-05 18:10:31.990490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.085904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:32.085948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:02.814 [2024-11-05 18:10:32.085964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.814 [2024-11-05 18:10:32.085978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.086081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:32.086093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:02.814 [2024-11-05 18:10:32.086107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.814 [2024-11-05 18:10:32.086117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.086172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:32.086183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:02.814 [2024-11-05 18:10:32.086196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.814 [2024-11-05 18:10:32.086206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.086320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:32.086334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:02.814 [2024-11-05 18:10:32.086347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.814 [2024-11-05 18:10:32.086357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.086395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:32.086406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:02.814 [2024-11-05 18:10:32.086442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.814 [2024-11-05 18:10:32.086452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.086508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:32.086524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:02.814 [2024-11-05 18:10:32.086537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.814 [2024-11-05 18:10:32.086546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.086593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:02.814 [2024-11-05 18:10:32.086606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:02.814 [2024-11-05 18:10:32.086618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:02.814 [2024-11-05 18:10:32.086628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:02.814 [2024-11-05 18:10:32.086755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 501.641 ms, result 0 00:21:02.814 true 00:21:02.814 18:10:32 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 75969 
00:21:02.814 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 75969 ']' 00:21:02.814 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 75969 00:21:02.814 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:21:02.814 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:02.814 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75969 00:21:03.074 killing process with pid 75969 00:21:03.074 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:03.074 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:03.074 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75969' 00:21:03.074 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 75969 00:21:03.074 18:10:32 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 75969 00:21:08.349 18:10:37 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:12.542 262144+0 records in 00:21:12.542 262144+0 records out 00:21:12.542 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.993 s, 269 MB/s 00:21:12.542 18:10:41 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:13.480 18:10:42 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:13.739 [2024-11-05 18:10:42.864115] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:21:13.739 [2024-11-05 18:10:42.864254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76207 ] 00:21:13.739 [2024-11-05 18:10:43.055299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.999 [2024-11-05 18:10:43.163096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.259 [2024-11-05 18:10:43.501809] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:14.259 [2024-11-05 18:10:43.501874] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:14.520 [2024-11-05 18:10:43.668738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.668784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:14.520 [2024-11-05 18:10:43.668806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:14.520 [2024-11-05 18:10:43.668816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.668857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.668869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.520 [2024-11-05 18:10:43.668882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:14.520 [2024-11-05 18:10:43.668891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.668910] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:14.520 [2024-11-05 18:10:43.669790] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:14.520 [2024-11-05 18:10:43.669814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.669825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.520 [2024-11-05 18:10:43.669836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:21:14.520 [2024-11-05 18:10:43.669846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.671240] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:14.520 [2024-11-05 18:10:43.689529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.689565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:14.520 [2024-11-05 18:10:43.689579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.319 ms 00:21:14.520 [2024-11-05 18:10:43.689590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.689659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.689672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:14.520 [2024-11-05 18:10:43.689682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:14.520 [2024-11-05 18:10:43.689699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.696474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.696502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.520 [2024-11-05 18:10:43.696513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.714 ms 00:21:14.520 [2024-11-05 18:10:43.696523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.696626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.696640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.520 [2024-11-05 18:10:43.696651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:14.520 [2024-11-05 18:10:43.696661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.696697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.696708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:14.520 [2024-11-05 18:10:43.696718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:14.520 [2024-11-05 18:10:43.696727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.696750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:14.520 [2024-11-05 18:10:43.701360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.701539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.520 [2024-11-05 18:10:43.701560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.622 ms 00:21:14.520 [2024-11-05 18:10:43.701581] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.701614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.701625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:14.520 [2024-11-05 18:10:43.701637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:14.520 [2024-11-05 18:10:43.701646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.701707] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:14.520 [2024-11-05 18:10:43.701735] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:14.520 [2024-11-05 18:10:43.701771] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:14.520 [2024-11-05 18:10:43.701795] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:14.520 [2024-11-05 18:10:43.701884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:14.520 [2024-11-05 18:10:43.701898] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:14.520 [2024-11-05 18:10:43.701912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:14.520 [2024-11-05 18:10:43.701925] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:14.520 [2024-11-05 18:10:43.701938] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:14.520 [2024-11-05 18:10:43.701949] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:14.520 [2024-11-05 18:10:43.701959] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:14.520 [2024-11-05 18:10:43.701970] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:14.520 [2024-11-05 18:10:43.701980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:14.520 [2024-11-05 18:10:43.701997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.702007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:14.520 [2024-11-05 18:10:43.702019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:21:14.520 [2024-11-05 18:10:43.702029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.702102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.520 [2024-11-05 18:10:43.702114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:14.520 [2024-11-05 18:10:43.702125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:14.520 [2024-11-05 18:10:43.702134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.520 [2024-11-05 18:10:43.702229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:14.520 [2024-11-05 18:10:43.702250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:14.520 [2024-11-05 18:10:43.702263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:14.520 [2024-11-05 18:10:43.702273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:14.520 [2024-11-05 18:10:43.702293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:14.520 [2024-11-05 18:10:43.702313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:14.520 [2024-11-05 18:10:43.702325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:14.520 [2024-11-05 18:10:43.702345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:14.520 [2024-11-05 18:10:43.702355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:14.520 [2024-11-05 18:10:43.702364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:14.520 [2024-11-05 18:10:43.702373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:14.520 [2024-11-05 18:10:43.702383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:14.520 [2024-11-05 18:10:43.702404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:14.520 [2024-11-05 18:10:43.702440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:14.520 [2024-11-05 18:10:43.702450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:14.520 [2024-11-05 18:10:43.702470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.520 [2024-11-05 18:10:43.702488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:14.520 [2024-11-05 18:10:43.702498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.520 [2024-11-05 18:10:43.702516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:14.520 [2024-11-05 18:10:43.702525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:14.520 [2024-11-05 18:10:43.702534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.521 [2024-11-05 18:10:43.702543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:14.521 [2024-11-05 18:10:43.702552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:14.521 [2024-11-05 18:10:43.702561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:14.521 [2024-11-05 18:10:43.702570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:14.521 [2024-11-05 18:10:43.702579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:14.521 [2024-11-05 18:10:43.702588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:14.521 [2024-11-05 18:10:43.702597] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:14.521 [2024-11-05 18:10:43.702607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:14.521 [2024-11-05 18:10:43.702616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:14.521 [2024-11-05 18:10:43.702625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:14.521 [2024-11-05 18:10:43.702634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:14.521 [2024-11-05 18:10:43.702643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.521 [2024-11-05 18:10:43.702653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:14.521 [2024-11-05 18:10:43.702662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:14.521 [2024-11-05 18:10:43.702671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.521 [2024-11-05 18:10:43.702692] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:14.521 [2024-11-05 18:10:43.702703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:14.521 [2024-11-05 18:10:43.702712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:14.521 [2024-11-05 18:10:43.702722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:14.521 [2024-11-05 18:10:43.702732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:14.521 [2024-11-05 18:10:43.702741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:14.521 [2024-11-05 18:10:43.702750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:14.521 [2024-11-05 18:10:43.702759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:14.521 [2024-11-05 18:10:43.702768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:14.521 [2024-11-05 18:10:43.702777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:14.521 [2024-11-05 18:10:43.702787] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:14.521 [2024-11-05 18:10:43.702799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:14.521 [2024-11-05 18:10:43.702811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:14.521 [2024-11-05 18:10:43.702821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:14.521 [2024-11-05 18:10:43.702831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:14.521 [2024-11-05 18:10:43.702841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:14.521 [2024-11-05 18:10:43.702851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:14.521 [2024-11-05 18:10:43.702861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:14.521 [2024-11-05 18:10:43.702871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:14.521 [2024-11-05 18:10:43.702881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:14.521 [2024-11-05 18:10:43.702890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:14.521 [2024-11-05 18:10:43.702900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:14.521 [2024-11-05 18:10:43.702910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:14.521 [2024-11-05 18:10:43.702920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:14.521 [2024-11-05 18:10:43.702930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:14.521 [2024-11-05 18:10:43.702940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:14.521 [2024-11-05 18:10:43.702960] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:14.521 [2024-11-05 18:10:43.702977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:14.521 [2024-11-05 18:10:43.702988] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:14.521 [2024-11-05 18:10:43.702997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:14.521 [2024-11-05 18:10:43.703009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:14.521 [2024-11-05 18:10:43.703021] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:14.521 [2024-11-05 18:10:43.703032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.703042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:14.521 [2024-11-05 18:10:43.703052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:21:14.521 [2024-11-05 18:10:43.703061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.742945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.743107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.521 [2024-11-05 18:10:43.743250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.905 ms 00:21:14.521 [2024-11-05 18:10:43.743288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.743396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.743520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:14.521 [2024-11-05 18:10:43.743560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.050 ms 00:21:14.521 [2024-11-05 18:10:43.743591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.802190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.802362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.521 [2024-11-05 18:10:43.802505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.576 ms 00:21:14.521 [2024-11-05 18:10:43.802545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.802600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.802633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.521 [2024-11-05 18:10:43.802721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:14.521 [2024-11-05 18:10:43.802763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.803275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.803401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.521 [2024-11-05 18:10:43.803499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:21:14.521 [2024-11-05 18:10:43.803534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.803688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.803724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.521 [2024-11-05 18:10:43.803793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:21:14.521 [2024-11-05 18:10:43.803833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.822781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.822931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.521 [2024-11-05 18:10:43.823016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.934 ms 00:21:14.521 [2024-11-05 18:10:43.823052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.521 [2024-11-05 18:10:43.841417] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:14.521 [2024-11-05 18:10:43.841576] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:14.521 [2024-11-05 18:10:43.841752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.521 [2024-11-05 18:10:43.841786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:14.521 [2024-11-05 18:10:43.841817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.614 ms 00:21:14.521 [2024-11-05 18:10:43.841846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:43.870247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:43.870372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:14.780 [2024-11-05 18:10:43.870505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.297 ms 00:21:14.780 [2024-11-05 18:10:43.870543] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:43.888096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:43.888234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:14.780 [2024-11-05 18:10:43.888359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.486 ms 00:21:14.780 [2024-11-05 18:10:43.888396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:43.905482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:43.905604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:14.780 [2024-11-05 18:10:43.905701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.040 ms 00:21:14.780 [2024-11-05 18:10:43.905738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:43.906585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:43.906705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:14.780 [2024-11-05 18:10:43.906780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:21:14.780 [2024-11-05 18:10:43.906814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:43.989047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:43.989269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:14.780 [2024-11-05 18:10:43.989401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.308 ms 00:21:14.780 [2024-11-05 18:10:43.989462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.000038] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:14.780 [2024-11-05 18:10:44.002557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:44.002694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:14.780 [2024-11-05 18:10:44.002830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.053 ms 00:21:14.780 [2024-11-05 18:10:44.002869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.002975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:44.003061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:14.780 [2024-11-05 18:10:44.003100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:14.780 [2024-11-05 18:10:44.003130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.003277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:44.003368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:14.780 [2024-11-05 18:10:44.003481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:14.780 [2024-11-05 18:10:44.003518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.003572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:44.003708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:14.780 [2024-11-05 18:10:44.003765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:14.780 [2024-11-05 18:10:44.003794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.003860] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:14.780 [2024-11-05 18:10:44.003897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:44.003919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:14.780 [2024-11-05 18:10:44.003931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:14.780 [2024-11-05 18:10:44.003941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.039618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:44.039781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:14.780 [2024-11-05 18:10:44.039909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.712 ms 00:21:14.780 [2024-11-05 18:10:44.039948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.040057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.780 [2024-11-05 18:10:44.040097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:14.780 [2024-11-05 18:10:44.040177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:14.780 [2024-11-05 18:10:44.040212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.780 [2024-11-05 18:10:44.041376] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.754 ms, result 0 00:21:16.159  [2024-11-05T18:10:46.051Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-05T18:10:47.432Z] Copying: 48/1024 [MB] (23 MBps) [2024-11-05T18:10:48.370Z] Copying: 72/1024 [MB] (23 MBps) [2024-11-05T18:10:49.309Z] Copying: 95/1024 [MB] (23 MBps) [2024-11-05T18:10:50.245Z] Copying: 119/1024 [MB] (23 MBps) [2024-11-05T18:10:51.183Z] Copying: 142/1024 [MB] (23 MBps) [2024-11-05T18:10:52.145Z] Copying: 166/1024 [MB] (24 MBps) [2024-11-05T18:10:53.098Z] Copying: 190/1024 [MB] (24 MBps) [2024-11-05T18:10:54.036Z] Copying: 213/1024 [MB] (22 MBps) [2024-11-05T18:10:55.416Z] Copying: 236/1024 [MB] (23 MBps) [2024-11-05T18:10:56.354Z] Copying: 258/1024 [MB] (22 MBps) [2024-11-05T18:10:57.292Z] Copying: 281/1024 [MB] (22 MBps) [2024-11-05T18:10:58.230Z] Copying: 305/1024 [MB] (23 MBps) [2024-11-05T18:10:59.168Z] Copying: 327/1024 [MB] (22 MBps) [2024-11-05T18:11:00.106Z] Copying: 349/1024 [MB] (22 MBps) [2024-11-05T18:11:01.042Z] Copying: 371/1024 [MB] (21 MBps) [2024-11-05T18:11:02.422Z] Copying: 395/1024 [MB] (23 MBps) [2024-11-05T18:11:03.360Z] Copying: 419/1024 [MB] (24 MBps) [2024-11-05T18:11:04.298Z] Copying: 443/1024 [MB] (24 MBps) [2024-11-05T18:11:05.236Z] Copying: 467/1024 [MB] (23 MBps) [2024-11-05T18:11:06.175Z] Copying: 489/1024 [MB] (22 MBps) [2024-11-05T18:11:07.114Z] Copying: 512/1024 [MB] (22 MBps) [2024-11-05T18:11:08.052Z] Copying: 534/1024 [MB] (22 MBps) [2024-11-05T18:11:09.450Z] Copying: 558/1024 [MB] (23 MBps) [2024-11-05T18:11:10.034Z] Copying: 582/1024 [MB] (24 MBps) [2024-11-05T18:11:11.412Z] Copying: 605/1024 [MB] (22 MBps) [2024-11-05T18:11:12.350Z] Copying: 627/1024 [MB] (22 
MBps) [2024-11-05T18:11:13.286Z] Copying: 649/1024 [MB] (22 MBps) [2024-11-05T18:11:14.225Z] Copying: 672/1024 [MB] (22 MBps) [2024-11-05T18:11:15.163Z] Copying: 694/1024 [MB] (21 MBps) [2024-11-05T18:11:16.101Z] Copying: 715/1024 [MB] (21 MBps) [2024-11-05T18:11:17.040Z] Copying: 739/1024 [MB] (23 MBps) [2024-11-05T18:11:18.418Z] Copying: 763/1024 [MB] (23 MBps) [2024-11-05T18:11:19.356Z] Copying: 787/1024 [MB] (24 MBps) [2024-11-05T18:11:20.294Z] Copying: 811/1024 [MB] (23 MBps) [2024-11-05T18:11:21.232Z] Copying: 833/1024 [MB] (22 MBps) [2024-11-05T18:11:22.170Z] Copying: 856/1024 [MB] (22 MBps) [2024-11-05T18:11:23.107Z] Copying: 878/1024 [MB] (22 MBps) [2024-11-05T18:11:24.045Z] Copying: 901/1024 [MB] (22 MBps) [2024-11-05T18:11:25.424Z] Copying: 923/1024 [MB] (22 MBps) [2024-11-05T18:11:25.993Z] Copying: 946/1024 [MB] (22 MBps) [2024-11-05T18:11:27.429Z] Copying: 969/1024 [MB] (23 MBps) [2024-11-05T18:11:27.996Z] Copying: 992/1024 [MB] (23 MBps) [2024-11-05T18:11:28.564Z] Copying: 1014/1024 [MB] (21 MBps) [2024-11-05T18:11:28.564Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-05 18:11:28.409853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.409995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:59.241 [2024-11-05 18:11:28.410086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:59.241 [2024-11-05 18:11:28.410125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.410178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.241 [2024-11-05 18:11:28.414358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.414533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:59.241 [2024-11-05 18:11:28.414663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.129 ms 00:21:59.241 [2024-11-05 18:11:28.414701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.416673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.416811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:59.241 [2024-11-05 18:11:28.416892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:21:59.241 [2024-11-05 18:11:28.416929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.434302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.434460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:59.241 [2024-11-05 18:11:28.434554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.358 ms 00:21:59.241 [2024-11-05 18:11:28.434593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.439381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.439536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:59.241 [2024-11-05 18:11:28.439657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:21:59.241 [2024-11-05 18:11:28.439694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.473312] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.473472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:59.241 [2024-11-05 18:11:28.473579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.590 ms 00:21:59.241 [2024-11-05 18:11:28.473615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.493758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.493889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:59.241 [2024-11-05 18:11:28.494015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.102 ms 00:21:59.241 [2024-11-05 18:11:28.494052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.494184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.494224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:59.241 [2024-11-05 18:11:28.494319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:59.241 [2024-11-05 18:11:28.494353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.528584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.528709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:59.241 [2024-11-05 18:11:28.528803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.246 ms 00:21:59.241 [2024-11-05 18:11:28.528838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.241 [2024-11-05 18:11:28.562349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.241 [2024-11-05 18:11:28.562526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:59.241 [2024-11-05 18:11:28.562688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.463 ms 00:21:59.241 [2024-11-05 18:11:28.562725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.502 [2024-11-05 18:11:28.596153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.502 [2024-11-05 18:11:28.596301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:59.502 [2024-11-05 18:11:28.596465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.424 ms 00:21:59.502 [2024-11-05 18:11:28.596503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.502 [2024-11-05 18:11:28.631706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.502 [2024-11-05 18:11:28.631860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:59.502 [2024-11-05 18:11:28.631990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.153 ms 00:21:59.502 [2024-11-05 18:11:28.632028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.502 [2024-11-05 18:11:28.632093] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:59.502 [2024-11-05 18:11:28.632137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632285] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.632951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:59.502 [2024-11-05 18:11:28.633481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 
18:11:28.633513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 
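Each line of this band dump reads "valid blocks / usable blocks, write count, state", and the numbers line up with the layout printed at startup: the base-device data region (type 0x9) spans 0x1900000 = 26,214,400 blocks, which works out to 262,144 blocks for each of the 100 bands, of which 261,120 are reported usable; the remaining 1,024 blocks per band are held back, presumably for per-band metadata (that split is inferred from the numbers, not stated by the log). A minimal C sketch of the arithmetic, with the 4 KiB block size inferred from the 102400.00 MiB data_btm region and all names hypothetical:

#include <stdio.h>

/* Figures copied from the log above; macro names are hypothetical. */
#define DATA_REGION_BLOCKS 0x1900000u /* base-dev region type 0x9, in blocks      */
#define NUM_BANDS          100u       /* bands listed in this dump                */
#define USABLE_PER_BAND    261120u    /* the "0 / 261120" printed per band        */
#define BLOCK_SIZE         4096u      /* inferred: 0x1900000 blocks == 102400 MiB */

int main(void)
{
    unsigned per_band = DATA_REGION_BLOCKS / NUM_BANDS;   /* 262144 */
    unsigned reserved = per_band - USABLE_PER_BAND;       /* 1024   */
    double   data_mib = (double)DATA_REGION_BLOCKS * BLOCK_SIZE / (1024.0 * 1024.0);

    printf("per band: %u blocks (%u usable, %u reserved)\n",
           per_band, USABLE_PER_BAND, reserved);
    printf("data region: %.2f MiB\n", data_mib);          /* 102400.00 */
    return 0;
}

The dump continues below with the remaining bands, all in the same empty state.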
00:21:59.503 [2024-11-05 18:11:28.633792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.633999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 
wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:59.503 [2024-11-05 18:11:28.634320] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:59.503 [2024-11-05 18:11:28.634342] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
5f1fabc7-dfe3-4e4a-be3d-af24c10698b1
00:21:59.503 [2024-11-05 18:11:28.634352] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:21:59.503 [2024-11-05 18:11:28.634368] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:21:59.503 [2024-11-05 18:11:28.634378] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:21:59.503 [2024-11-05 18:11:28.634388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:21:59.503 [2024-11-05 18:11:28.634398] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:21:59.503 [2024-11-05 18:11:28.634416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:21:59.503 [2024-11-05 18:11:28.634427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:21:59.503 [2024-11-05 18:11:28.634450] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:21:59.503 [2024-11-05 18:11:28.634460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:21:59.503 [2024-11-05 18:11:28.634470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.504 [2024-11-05 18:11:28.634481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:21:59.504 [2024-11-05 18:11:28.634491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms
00:21:59.504 [2024-11-05 18:11:28.634501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.504 [2024-11-05 18:11:28.653724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.504 [2024-11-05 18:11:28.653758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:21:59.504 [2024-11-05 18:11:28.653770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.217 ms
00:21:59.504 [2024-11-05 18:11:28.653780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.504 [2024-11-05 18:11:28.654307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:59.504 [2024-11-05 18:11:28.654320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:21:59.504 [2024-11-05 18:11:28.654329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms
00:21:59.504 [2024-11-05 18:11:28.654339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.504 [2024-11-05 18:11:28.702445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:59.504 [2024-11-05 18:11:28.702482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:21:59.504 [2024-11-05 18:11:28.702495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:59.504 [2024-11-05 18:11:28.702504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.504 [2024-11-05 18:11:28.702552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:59.504 [2024-11-05 18:11:28.702562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:21:59.504 [2024-11-05 18:11:28.702572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:59.504 [2024-11-05 18:11:28.702581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:59.504 [2024-11-05 18:11:28.702648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:59.504 [2024-11-05 18:11:28.702661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.504
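The statistics block above also explains the "WAF: inf" line: write amplification factor is the ratio of total media writes to user writes, and with user writes at 0 the ratio has no finite value, so it prints as inf. The 960 total writes alongside zero user writes are the FTL's own bookkeeping I/O, presumably the superblock, valid-map, band and trim metadata persisted during this shutdown. A minimal sketch of that ratio, assuming the total/user definition the dump labels suggest:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Write amplification as the dump labels suggest: all media writes
 * divided by user-initiated writes. Zero user writes has no finite
 * ratio, hence the "WAF: inf" line above. */
static double waf(uint64_t total_writes, uint64_t user_writes)
{
    return user_writes ? (double)total_writes / (double)user_writes
                       : INFINITY;
}

int main(void)
{
    printf("WAF: %g\n", waf(960, 0));   /* the shutdown above -> inf              */
    printf("WAF: %g\n", waf(960, 480)); /* 2: two media writes per user write     */
    return 0;
}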
[2024-11-05 18:11:28.702671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.504 [2024-11-05 18:11:28.702680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.504 [2024-11-05 18:11:28.702696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.504 [2024-11-05 18:11:28.702706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.504 [2024-11-05 18:11:28.702715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.504 [2024-11-05 18:11:28.702725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.504 [2024-11-05 18:11:28.819003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.504 [2024-11-05 18:11:28.819054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.504 [2024-11-05 18:11:28.819068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.504 [2024-11-05 18:11:28.819078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.763 [2024-11-05 18:11:28.913097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.763 [2024-11-05 18:11:28.913342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.763 [2024-11-05 18:11:28.913363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.763 [2024-11-05 18:11:28.913374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.763 [2024-11-05 18:11:28.913483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.763 [2024-11-05 18:11:28.913503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:59.763 [2024-11-05 18:11:28.913514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.763 [2024-11-05 18:11:28.913526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.763 [2024-11-05 18:11:28.913565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.764 [2024-11-05 18:11:28.913576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:59.764 [2024-11-05 18:11:28.913587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.764 [2024-11-05 18:11:28.913597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.764 [2024-11-05 18:11:28.913711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.764 [2024-11-05 18:11:28.913730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:59.764 [2024-11-05 18:11:28.913741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.764 [2024-11-05 18:11:28.913751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.764 [2024-11-05 18:11:28.913786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.764 [2024-11-05 18:11:28.913799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:59.764 [2024-11-05 18:11:28.913809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.764 [2024-11-05 18:11:28.913820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.764 [2024-11-05 18:11:28.913856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.764 [2024-11-05 18:11:28.913867] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:59.764 [2024-11-05 18:11:28.913881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.764 [2024-11-05 18:11:28.913893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.764 [2024-11-05 18:11:28.913933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.764 [2024-11-05 18:11:28.913945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:59.764 [2024-11-05 18:11:28.913955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.764 [2024-11-05 18:11:28.913966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.764 [2024-11-05 18:11:28.914092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 505.018 ms, result 0 00:22:00.701 00:22:00.701 00:22:00.701 18:11:29 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:00.701 [2024-11-05 18:11:30.026138] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:22:00.960 [2024-11-05 18:11:30.026614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76691 ] 00:22:00.960 [2024-11-05 18:11:30.203521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.219 [2024-11-05 18:11:30.310288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.478 [2024-11-05 18:11:30.633764] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.478 [2024-11-05 18:11:30.633830] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:01.478 [2024-11-05 18:11:30.794112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.478 [2024-11-05 18:11:30.794348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:01.478 [2024-11-05 18:11:30.794380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:01.478 [2024-11-05 18:11:30.794391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.478 [2024-11-05 18:11:30.794467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.478 [2024-11-05 18:11:30.794481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.478 [2024-11-05 18:11:30.794496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:01.478 [2024-11-05 18:11:30.794507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.478 [2024-11-05 18:11:30.794531] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:01.478 [2024-11-05 18:11:30.795464] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:01.478 [2024-11-05 18:11:30.795492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.478 [2024-11-05 18:11:30.795505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.478 [2024-11-05 18:11:30.795517] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:22:01.478 [2024-11-05 18:11:30.795527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.479 [2024-11-05 18:11:30.796993] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:01.739 [2024-11-05 18:11:30.814711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.739 [2024-11-05 18:11:30.814750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:01.739 [2024-11-05 18:11:30.814764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.747 ms 00:22:01.739 [2024-11-05 18:11:30.814774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.739 [2024-11-05 18:11:30.814841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.739 [2024-11-05 18:11:30.814853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:01.739 [2024-11-05 18:11:30.814864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:01.739 [2024-11-05 18:11:30.814874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.739 [2024-11-05 18:11:30.821868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.739 [2024-11-05 18:11:30.822008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.739 [2024-11-05 18:11:30.822148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.939 ms 00:22:01.739 [2024-11-05 18:11:30.822187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.739 [2024-11-05 18:11:30.822294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.739 [2024-11-05 18:11:30.822330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.739 [2024-11-05 18:11:30.822430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:01.739 [2024-11-05 18:11:30.822471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.739 [2024-11-05 18:11:30.822538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.739 [2024-11-05 18:11:30.822573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:01.739 [2024-11-05 18:11:30.822605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:01.739 [2024-11-05 18:11:30.822691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.740 [2024-11-05 18:11:30.822746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:01.740 [2024-11-05 18:11:30.827507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.740 [2024-11-05 18:11:30.827655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.740 [2024-11-05 18:11:30.827675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.774 ms 00:22:01.740 [2024-11-05 18:11:30.827691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.740 [2024-11-05 18:11:30.827729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.740 [2024-11-05 18:11:30.827740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:01.740 [2024-11-05 18:11:30.827751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:01.740 [2024-11-05 18:11:30.827761] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.740 [2024-11-05 18:11:30.827814] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:01.740 [2024-11-05 18:11:30.827837] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:01.740 [2024-11-05 18:11:30.827871] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:01.740 [2024-11-05 18:11:30.827892] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:01.740 [2024-11-05 18:11:30.827979] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:01.740 [2024-11-05 18:11:30.827994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:01.740 [2024-11-05 18:11:30.828007] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:01.740 [2024-11-05 18:11:30.828020] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828032] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828044] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:01.740 [2024-11-05 18:11:30.828054] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:01.740 [2024-11-05 18:11:30.828064] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:01.740 [2024-11-05 18:11:30.828076] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:01.740 [2024-11-05 18:11:30.828089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.740 [2024-11-05 18:11:30.828100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:01.740 [2024-11-05 18:11:30.828110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:22:01.740 [2024-11-05 18:11:30.828121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.740 [2024-11-05 18:11:30.828190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.740 [2024-11-05 18:11:30.828202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:01.740 [2024-11-05 18:11:30.828213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:01.740 [2024-11-05 18:11:30.828223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.740 [2024-11-05 18:11:30.828313] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:01.740 [2024-11-05 18:11:30.828332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:01.740 [2024-11-05 18:11:30.828343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:01.740 [2024-11-05 18:11:30.828374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:01.740 
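This second startup, after the restore command above reopened ftl0, repeats the same layout parameters, and they are easy to cross-check: 20,971,520 L2P entries at the 4-byte address size is exactly 80 MiB, matching the "Region l2p ... 80.00 MiB" line just printed, and with the 4 KiB FTL block size inferred earlier the table maps 80 GiB of user-addressable space (of which the l2p cache keeps at most 9 of 10 MiB resident, per the ftl_l2p_cache notice during the first run). A minimal check of that arithmetic, names hypothetical:

#include <stdio.h>
#include <stdint.h>

/* Figures from the "FTL layout setup" notices; names are hypothetical. */
#define L2P_ENTRIES 20971520ull /* "L2P entries"                 */
#define L2P_ADDR_SZ 4ull        /* "L2P address size" in bytes   */
#define BLOCK_SIZE  4096ull     /* FTL block size, inferred      */

int main(void)
{
    uint64_t table = L2P_ENTRIES * L2P_ADDR_SZ;  /* on-disk mapping table */
    uint64_t space = L2P_ENTRIES * BLOCK_SIZE;   /* addressable user data */

    printf("L2P table: %.2f MiB\n", table / (1024.0 * 1024.0));        /* 80.00 */
    printf("mapped:    %.2f GiB\n", space / (1024.0 * 1024.0 * 1024.0)); /* 80.00 */
    return 0;
}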
[2024-11-05 18:11:30.828394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:01.740 [2024-11-05 18:11:30.828404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.740 [2024-11-05 18:11:30.828449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:01.740 [2024-11-05 18:11:30.828460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:01.740 [2024-11-05 18:11:30.828469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:01.740 [2024-11-05 18:11:30.828479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:01.740 [2024-11-05 18:11:30.828489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:01.740 [2024-11-05 18:11:30.828507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:01.740 [2024-11-05 18:11:30.828527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:01.740 [2024-11-05 18:11:30.828556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:01.740 [2024-11-05 18:11:30.828584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:01.740 [2024-11-05 18:11:30.828611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:01.740 [2024-11-05 18:11:30.828639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:01.740 [2024-11-05 18:11:30.828667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.740 [2024-11-05 18:11:30.828685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:01.740 [2024-11-05 18:11:30.828698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:01.740 [2024-11-05 18:11:30.828706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:01.740 [2024-11-05 18:11:30.828715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:01.740 [2024-11-05 18:11:30.828725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:01.740 [2024-11-05 18:11:30.828734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:01.740 [2024-11-05 18:11:30.828753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:01.740 [2024-11-05 18:11:30.828763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828772] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:01.740 [2024-11-05 18:11:30.828782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:01.740 [2024-11-05 18:11:30.828792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:01.740 [2024-11-05 18:11:30.828811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:01.740 [2024-11-05 18:11:30.828821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:01.740 [2024-11-05 18:11:30.828830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:01.740 [2024-11-05 18:11:30.828840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:01.740 [2024-11-05 18:11:30.828849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:01.740 [2024-11-05 18:11:30.828858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:01.740 [2024-11-05 18:11:30.828869] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:01.740 [2024-11-05 18:11:30.828882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.740 [2024-11-05 18:11:30.828893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:01.740 [2024-11-05 18:11:30.828904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:01.740 [2024-11-05 18:11:30.828914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:01.740 [2024-11-05 18:11:30.828924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:01.740 [2024-11-05 18:11:30.828934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:01.740 [2024-11-05 18:11:30.828944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:01.740 [2024-11-05 18:11:30.828955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:01.740 [2024-11-05 18:11:30.828965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:01.740 [2024-11-05 18:11:30.828975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:01.740 [2024-11-05 18:11:30.828986] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:01.740 [2024-11-05 18:11:30.828996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:01.740 [2024-11-05 18:11:30.829006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:01.741 [2024-11-05 18:11:30.829017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:01.741 [2024-11-05 18:11:30.829028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:01.741 [2024-11-05 18:11:30.829037] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:01.741 [2024-11-05 18:11:30.829052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:01.741 [2024-11-05 18:11:30.829063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:01.741 [2024-11-05 18:11:30.829073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:01.741 [2024-11-05 18:11:30.829094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:01.741 [2024-11-05 18:11:30.829107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:01.741 [2024-11-05 18:11:30.829119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.829130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:01.741 [2024-11-05 18:11:30.829140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:22:01.741 [2024-11-05 18:11:30.829149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.865727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.865763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.741 [2024-11-05 18:11:30.865776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.594 ms 00:22:01.741 [2024-11-05 18:11:30.865787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.865860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.865872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:01.741 [2024-11-05 18:11:30.865883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:01.741 [2024-11-05 18:11:30.865892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.939370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.939417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.741 [2024-11-05 18:11:30.939432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.545 ms 
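(Context for the trace above, as a hedged sketch only: the ftl0 instance being started here sits on a base bdev plus an NV-cache "write buffer" bdev, and is normally created through SPDK's rpc.py before a run like this. The bdev names nvme0n1/nvc0n1p0 below are placeholders, the exact bdev_ftl_create flag set can differ between SPDK revisions, and the ftl.json path simply mirrors the --json argument that spdk_dd is given later in this log.)

#!/usr/bin/env bash
# Sketch, not taken from this run: bring up an FTL bdev comparable to ftl0.
# Assumes a running SPDK app target (e.g. build/bin/spdk_tgt) listening on the
# default RPC socket; nvme0n1 (base) and nvc0n1p0 (NV cache) are placeholder
# bdev names for illustration only.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create the FTL instance over a base bdev with an NV-cache write-buffer bdev;
# this is what produces the "Open base bdev" / "Open cache bdev" steps traced above.
$RPC bdev_ftl_create -b ftl0 -d nvme0n1 -c nvc0n1p0

# Save the resulting configuration so later tools can reload the same ftl0
# instance from JSON (cf. the spdk_dd --json=.../test/ftl/config/ftl.json call
# further down in this log).
$RPC save_config > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json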
00:22:01.741 [2024-11-05 18:11:30.939442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.939479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.939491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.741 [2024-11-05 18:11:30.939502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:01.741 [2024-11-05 18:11:30.939515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.940022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.940044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.741 [2024-11-05 18:11:30.940056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:22:01.741 [2024-11-05 18:11:30.940065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.940172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.940186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.741 [2024-11-05 18:11:30.940196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:01.741 [2024-11-05 18:11:30.940212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.959252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.959496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.741 [2024-11-05 18:11:30.959524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.051 ms 00:22:01.741 [2024-11-05 18:11:30.959536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:30.976989] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:01.741 [2024-11-05 18:11:30.977029] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:01.741 [2024-11-05 18:11:30.977045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:30.977055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:01.741 [2024-11-05 18:11:30.977067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.435 ms 00:22:01.741 [2024-11-05 18:11:30.977076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:31.005102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:31.005148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:01.741 [2024-11-05 18:11:31.005162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.028 ms 00:22:01.741 [2024-11-05 18:11:31.005173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:31.022175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:31.022213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:01.741 [2024-11-05 18:11:31.022227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.988 ms 00:22:01.741 [2024-11-05 18:11:31.022237] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:31.038965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:31.039002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:01.741 [2024-11-05 18:11:31.039016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.718 ms 00:22:01.741 [2024-11-05 18:11:31.039026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.741 [2024-11-05 18:11:31.039750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.741 [2024-11-05 18:11:31.039771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:01.741 [2024-11-05 18:11:31.039782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:22:01.741 [2024-11-05 18:11:31.039795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.120620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.120867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:02.001 [2024-11-05 18:11:31.120898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.933 ms 00:22:02.001 [2024-11-05 18:11:31.120909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.130833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:02.001 [2024-11-05 18:11:31.132979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.133010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:02.001 [2024-11-05 18:11:31.133023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.955 ms 00:22:02.001 [2024-11-05 18:11:31.133034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.133104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.133117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:02.001 [2024-11-05 18:11:31.133128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:02.001 [2024-11-05 18:11:31.133142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.133210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.133223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:02.001 [2024-11-05 18:11:31.133233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:02.001 [2024-11-05 18:11:31.133243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.133263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.133274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:02.001 [2024-11-05 18:11:31.133284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:02.001 [2024-11-05 18:11:31.133294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.133327] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:02.001 [2024-11-05 18:11:31.133342] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.133352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:02.001 [2024-11-05 18:11:31.133362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:02.001 [2024-11-05 18:11:31.133371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.167405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.167583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:02.001 [2024-11-05 18:11:31.167621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.070 ms 00:22:02.001 [2024-11-05 18:11:31.167639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.167711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.001 [2024-11-05 18:11:31.167724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:02.001 [2024-11-05 18:11:31.167736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:02.001 [2024-11-05 18:11:31.167747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.001 [2024-11-05 18:11:31.168823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.864 ms, result 0 00:22:03.381  [2024-11-05T18:11:33.642Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-05T18:11:34.580Z] Copying: 47/1024 [MB] (23 MBps) [2024-11-05T18:11:35.518Z] Copying: 71/1024 [MB] (24 MBps) [2024-11-05T18:11:36.455Z] Copying: 96/1024 [MB] (24 MBps) [2024-11-05T18:11:37.392Z] Copying: 120/1024 [MB] (24 MBps) [2024-11-05T18:11:38.770Z] Copying: 144/1024 [MB] (24 MBps) [2024-11-05T18:11:39.725Z] Copying: 169/1024 [MB] (24 MBps) [2024-11-05T18:11:40.663Z] Copying: 194/1024 [MB] (25 MBps) [2024-11-05T18:11:41.603Z] Copying: 219/1024 [MB] (25 MBps) [2024-11-05T18:11:42.540Z] Copying: 244/1024 [MB] (24 MBps) [2024-11-05T18:11:43.478Z] Copying: 268/1024 [MB] (24 MBps) [2024-11-05T18:11:44.416Z] Copying: 292/1024 [MB] (23 MBps) [2024-11-05T18:11:45.354Z] Copying: 316/1024 [MB] (23 MBps) [2024-11-05T18:11:46.733Z] Copying: 341/1024 [MB] (25 MBps) [2024-11-05T18:11:47.671Z] Copying: 367/1024 [MB] (25 MBps) [2024-11-05T18:11:48.609Z] Copying: 390/1024 [MB] (23 MBps) [2024-11-05T18:11:49.545Z] Copying: 415/1024 [MB] (24 MBps) [2024-11-05T18:11:50.483Z] Copying: 439/1024 [MB] (24 MBps) [2024-11-05T18:11:51.421Z] Copying: 462/1024 [MB] (23 MBps) [2024-11-05T18:11:52.359Z] Copying: 486/1024 [MB] (23 MBps) [2024-11-05T18:11:53.738Z] Copying: 509/1024 [MB] (23 MBps) [2024-11-05T18:11:54.712Z] Copying: 534/1024 [MB] (24 MBps) [2024-11-05T18:11:55.650Z] Copying: 558/1024 [MB] (23 MBps) [2024-11-05T18:11:56.588Z] Copying: 581/1024 [MB] (23 MBps) [2024-11-05T18:11:57.526Z] Copying: 605/1024 [MB] (23 MBps) [2024-11-05T18:11:58.464Z] Copying: 629/1024 [MB] (24 MBps) [2024-11-05T18:11:59.402Z] Copying: 653/1024 [MB] (23 MBps) [2024-11-05T18:12:00.340Z] Copying: 677/1024 [MB] (23 MBps) [2024-11-05T18:12:01.720Z] Copying: 700/1024 [MB] (23 MBps) [2024-11-05T18:12:02.658Z] Copying: 724/1024 [MB] (24 MBps) [2024-11-05T18:12:03.596Z] Copying: 748/1024 [MB] (24 MBps) [2024-11-05T18:12:04.533Z] Copying: 772/1024 [MB] (23 MBps) [2024-11-05T18:12:05.471Z] Copying: 796/1024 [MB] (24 MBps) [2024-11-05T18:12:06.409Z] Copying: 820/1024 [MB] (24 MBps) [2024-11-05T18:12:07.348Z] Copying: 845/1024 
[MB] (24 MBps) [2024-11-05T18:12:08.728Z] Copying: 869/1024 [MB] (24 MBps) [2024-11-05T18:12:09.666Z] Copying: 893/1024 [MB] (24 MBps) [2024-11-05T18:12:10.603Z] Copying: 917/1024 [MB] (23 MBps) [2024-11-05T18:12:11.541Z] Copying: 941/1024 [MB] (23 MBps) [2024-11-05T18:12:12.488Z] Copying: 965/1024 [MB] (24 MBps) [2024-11-05T18:12:13.425Z] Copying: 989/1024 [MB] (23 MBps) [2024-11-05T18:12:13.993Z] Copying: 1012/1024 [MB] (22 MBps) [2024-11-05T18:12:13.993Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-05 18:12:13.955797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.670 [2024-11-05 18:12:13.955893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:44.670 [2024-11-05 18:12:13.955923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:44.670 [2024-11-05 18:12:13.955943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.670 [2024-11-05 18:12:13.955984] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:44.670 [2024-11-05 18:12:13.963840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.670 [2024-11-05 18:12:13.963901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:44.670 [2024-11-05 18:12:13.963931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.837 ms 00:22:44.670 [2024-11-05 18:12:13.963947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.670 [2024-11-05 18:12:13.964271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.670 [2024-11-05 18:12:13.964294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:44.670 [2024-11-05 18:12:13.964312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:22:44.670 [2024-11-05 18:12:13.964335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.670 [2024-11-05 18:12:13.968800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.670 [2024-11-05 18:12:13.968837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:44.670 [2024-11-05 18:12:13.968855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:22:44.670 [2024-11-05 18:12:13.968871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.670 [2024-11-05 18:12:13.975783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.670 [2024-11-05 18:12:13.975827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:44.670 [2024-11-05 18:12:13.975841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.888 ms 00:22:44.670 [2024-11-05 18:12:13.975853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.012344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.931 [2024-11-05 18:12:14.012387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:44.931 [2024-11-05 18:12:14.012402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.444 ms 00:22:44.931 [2024-11-05 18:12:14.012424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.032984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.931 [2024-11-05 18:12:14.033178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
valid map metadata 00:22:44.931 [2024-11-05 18:12:14.033202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.549 ms 00:22:44.931 [2024-11-05 18:12:14.033214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.033361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.931 [2024-11-05 18:12:14.033383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:44.931 [2024-11-05 18:12:14.033395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:44.931 [2024-11-05 18:12:14.033430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.067718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.931 [2024-11-05 18:12:14.067757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:44.931 [2024-11-05 18:12:14.067770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.326 ms 00:22:44.931 [2024-11-05 18:12:14.067780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.103845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.931 [2024-11-05 18:12:14.103892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:44.931 [2024-11-05 18:12:14.103905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.083 ms 00:22:44.931 [2024-11-05 18:12:14.103914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.137846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.931 [2024-11-05 18:12:14.137882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:44.931 [2024-11-05 18:12:14.137895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.949 ms 00:22:44.931 [2024-11-05 18:12:14.137905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.170732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.931 [2024-11-05 18:12:14.170770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:44.931 [2024-11-05 18:12:14.170782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.807 ms 00:22:44.931 [2024-11-05 18:12:14.170791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.931 [2024-11-05 18:12:14.170827] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:44.931 [2024-11-05 18:12:14.170842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170914] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.170994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171174] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:44.931 [2024-11-05 18:12:14.171272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 
18:12:14.171434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:22:44.932 [2024-11-05 18:12:14.171688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:44.932 [2024-11-05 18:12:14.171895] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:44.932 [2024-11-05 18:12:14.171909] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f1fabc7-dfe3-4e4a-be3d-af24c10698b1 00:22:44.932 [2024-11-05 18:12:14.171919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:44.932 [2024-11-05 18:12:14.171929] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:44.932 [2024-11-05 18:12:14.171938] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:44.932 [2024-11-05 18:12:14.171948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:44.932 [2024-11-05 18:12:14.171957] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:22:44.932 [2024-11-05 18:12:14.171967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:44.932 [2024-11-05 18:12:14.171986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:44.932 [2024-11-05 18:12:14.171995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:44.932 [2024-11-05 18:12:14.172003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:44.932 [2024-11-05 18:12:14.172012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.932 [2024-11-05 18:12:14.172022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:44.932 [2024-11-05 18:12:14.172032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.188 ms 00:22:44.932 [2024-11-05 18:12:14.172041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.932 [2024-11-05 18:12:14.190957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.932 [2024-11-05 18:12:14.190992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:44.932 [2024-11-05 18:12:14.191005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.898 ms 00:22:44.932 [2024-11-05 18:12:14.191014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.932 [2024-11-05 18:12:14.191571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.932 [2024-11-05 18:12:14.191584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:44.932 [2024-11-05 18:12:14.191594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:22:44.932 [2024-11-05 18:12:14.191609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.932 [2024-11-05 18:12:14.239247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.932 [2024-11-05 18:12:14.239282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.932 [2024-11-05 18:12:14.239295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.932 [2024-11-05 18:12:14.239305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.932 [2024-11-05 18:12:14.239355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.932 [2024-11-05 18:12:14.239366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.932 [2024-11-05 18:12:14.239376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.932 [2024-11-05 18:12:14.239391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.932 [2024-11-05 18:12:14.239464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.932 [2024-11-05 18:12:14.239478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.932 [2024-11-05 18:12:14.239488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.932 [2024-11-05 18:12:14.239498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.932 [2024-11-05 18:12:14.239514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.932 [2024-11-05 18:12:14.239524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:44.932 [2024-11-05 18:12:14.239534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.932 [2024-11-05 18:12:14.239544] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.192 [2024-11-05 18:12:14.356221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.192 [2024-11-05 18:12:14.356285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.192 [2024-11-05 18:12:14.356298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.192 [2024-11-05 18:12:14.356309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.192 [2024-11-05 18:12:14.452085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.192 [2024-11-05 18:12:14.452264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.192 [2024-11-05 18:12:14.452302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.192 [2024-11-05 18:12:14.452313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.192 [2024-11-05 18:12:14.452404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.192 [2024-11-05 18:12:14.452417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.192 [2024-11-05 18:12:14.452448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.192 [2024-11-05 18:12:14.452458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.192 [2024-11-05 18:12:14.452498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.192 [2024-11-05 18:12:14.452509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.192 [2024-11-05 18:12:14.452520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.192 [2024-11-05 18:12:14.452530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.192 [2024-11-05 18:12:14.452656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.192 [2024-11-05 18:12:14.452672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.192 [2024-11-05 18:12:14.452683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.192 [2024-11-05 18:12:14.452694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.192 [2024-11-05 18:12:14.452731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.192 [2024-11-05 18:12:14.452744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:45.192 [2024-11-05 18:12:14.452755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.192 [2024-11-05 18:12:14.452765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.193 [2024-11-05 18:12:14.452825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.193 [2024-11-05 18:12:14.452844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.193 [2024-11-05 18:12:14.452856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:45.193 [2024-11-05 18:12:14.452866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.193 [2024-11-05 18:12:14.452916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:45.193 [2024-11-05 18:12:14.452929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.193 [2024-11-05 18:12:14.452942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:45.193 [2024-11-05 18:12:14.452953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.193 [2024-11-05 18:12:14.453072] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 498.065 ms, result 0 00:22:46.130 00:22:46.130 00:22:46.130 18:12:15 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:48.037 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:48.037 18:12:17 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:48.037 [2024-11-05 18:12:17.195652] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:22:48.037 [2024-11-05 18:12:17.195973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77173 ] 00:22:48.296 [2024-11-05 18:12:17.377993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.296 [2024-11-05 18:12:17.483343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.556 [2024-11-05 18:12:17.836824] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.556 [2024-11-05 18:12:17.836892] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:48.816 [2024-11-05 18:12:17.997380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.816 [2024-11-05 18:12:17.997599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:48.816 [2024-11-05 18:12:17.997648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:48.816 [2024-11-05 18:12:17.997660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.816 [2024-11-05 18:12:17.997728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.816 [2024-11-05 18:12:17.997742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:48.817 [2024-11-05 18:12:17.997757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:48.817 [2024-11-05 18:12:17.997766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:17.997790] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:48.817 [2024-11-05 18:12:17.998828] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:48.817 [2024-11-05 18:12:17.998860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:17.998871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:48.817 [2024-11-05 18:12:17.998882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:22:48.817 [2024-11-05 18:12:17.998892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.000317] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:48.817 [2024-11-05 18:12:18.018955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 
18:12:18.018995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:48.817 [2024-11-05 18:12:18.019010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.669 ms 00:22:48.817 [2024-11-05 18:12:18.019020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.019084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.019097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:48.817 [2024-11-05 18:12:18.019108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:48.817 [2024-11-05 18:12:18.019119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.025734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.025763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:48.817 [2024-11-05 18:12:18.025774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.556 ms 00:22:48.817 [2024-11-05 18:12:18.025784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.025859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.025873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:48.817 [2024-11-05 18:12:18.025883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:48.817 [2024-11-05 18:12:18.025893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.025931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.025943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:48.817 [2024-11-05 18:12:18.025954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:48.817 [2024-11-05 18:12:18.025963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.025985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:48.817 [2024-11-05 18:12:18.030684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.030718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:48.817 [2024-11-05 18:12:18.030731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.712 ms 00:22:48.817 [2024-11-05 18:12:18.030744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.030773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.030784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:48.817 [2024-11-05 18:12:18.030794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:48.817 [2024-11-05 18:12:18.030805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.030858] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:48.817 [2024-11-05 18:12:18.030881] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:48.817 [2024-11-05 18:12:18.030917] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:48.817 [2024-11-05 18:12:18.030938] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:48.817 [2024-11-05 18:12:18.031027] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:48.817 [2024-11-05 18:12:18.031041] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:48.817 [2024-11-05 18:12:18.031054] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:48.817 [2024-11-05 18:12:18.031067] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031079] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031091] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:48.817 [2024-11-05 18:12:18.031101] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:48.817 [2024-11-05 18:12:18.031111] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:48.817 [2024-11-05 18:12:18.031122] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:48.817 [2024-11-05 18:12:18.031137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.031147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:48.817 [2024-11-05 18:12:18.031158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:22:48.817 [2024-11-05 18:12:18.031168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.031237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.817 [2024-11-05 18:12:18.031249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:48.817 [2024-11-05 18:12:18.031259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:48.817 [2024-11-05 18:12:18.031269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.817 [2024-11-05 18:12:18.031360] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:48.817 [2024-11-05 18:12:18.031378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:48.817 [2024-11-05 18:12:18.031389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:48.817 [2024-11-05 18:12:18.031433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:48.817 [2024-11-05 18:12:18.031464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:48.817 [2024-11-05 18:12:18.031484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md_mirror 00:22:48.817 [2024-11-05 18:12:18.031493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:48.817 [2024-11-05 18:12:18.031502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:48.817 [2024-11-05 18:12:18.031511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:48.817 [2024-11-05 18:12:18.031521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:48.817 [2024-11-05 18:12:18.031539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:48.817 [2024-11-05 18:12:18.031558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:48.817 [2024-11-05 18:12:18.031586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:48.817 [2024-11-05 18:12:18.031613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:48.817 [2024-11-05 18:12:18.031641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:48.817 [2024-11-05 18:12:18.031668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:48.817 [2024-11-05 18:12:18.031696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:48.817 [2024-11-05 18:12:18.031714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:48.817 [2024-11-05 18:12:18.031723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:48.817 [2024-11-05 18:12:18.031732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:48.817 [2024-11-05 18:12:18.031741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:48.817 [2024-11-05 18:12:18.031750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:48.817 [2024-11-05 18:12:18.031758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:48.817 [2024-11-05 18:12:18.031776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:48.817 [2024-11-05 18:12:18.031789] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.817 [2024-11-05 18:12:18.031798] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:48.817 [2024-11-05 18:12:18.031809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:48.817 [2024-11-05 18:12:18.031819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:48.817 [2024-11-05 18:12:18.031829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:48.818 [2024-11-05 18:12:18.031839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:48.818 [2024-11-05 18:12:18.031848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:48.818 [2024-11-05 18:12:18.031858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:48.818 [2024-11-05 18:12:18.031867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:48.818 [2024-11-05 18:12:18.031876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:48.818 [2024-11-05 18:12:18.031885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:48.818 [2024-11-05 18:12:18.031895] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:48.818 [2024-11-05 18:12:18.031908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.818 [2024-11-05 18:12:18.031919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:48.818 [2024-11-05 18:12:18.031929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:48.818 [2024-11-05 18:12:18.031939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:48.818 [2024-11-05 18:12:18.031949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:48.818 [2024-11-05 18:12:18.031960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:48.818 [2024-11-05 18:12:18.031970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:48.818 [2024-11-05 18:12:18.031980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:48.818 [2024-11-05 18:12:18.031991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:48.818 [2024-11-05 18:12:18.032002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:48.818 [2024-11-05 18:12:18.032012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:48.818 [2024-11-05 18:12:18.032022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:48.818 [2024-11-05 18:12:18.032032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 
ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:48.818 [2024-11-05 18:12:18.032042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:48.818 [2024-11-05 18:12:18.032052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:48.818 [2024-11-05 18:12:18.032062] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:48.818 [2024-11-05 18:12:18.032076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.818 [2024-11-05 18:12:18.032087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:48.818 [2024-11-05 18:12:18.032099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:48.818 [2024-11-05 18:12:18.032109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:48.818 [2024-11-05 18:12:18.032119] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:48.818 [2024-11-05 18:12:18.032130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.818 [2024-11-05 18:12:18.032141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:48.818 [2024-11-05 18:12:18.032151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:22:48.818 [2024-11-05 18:12:18.032161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.818 [2024-11-05 18:12:18.069922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.818 [2024-11-05 18:12:18.069958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:48.818 [2024-11-05 18:12:18.069971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.778 ms 00:22:48.818 [2024-11-05 18:12:18.069982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.818 [2024-11-05 18:12:18.070062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.818 [2024-11-05 18:12:18.070073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:48.818 [2024-11-05 18:12:18.070084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:48.818 [2024-11-05 18:12:18.070094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.818 [2024-11-05 18:12:18.133835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.818 [2024-11-05 18:12:18.133873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:48.818 [2024-11-05 18:12:18.133887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.789 ms 00:22:48.818 [2024-11-05 18:12:18.133897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.818 [2024-11-05 18:12:18.133936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.818 [2024-11-05 18:12:18.133947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:48.818 [2024-11-05 18:12:18.133958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.002 ms 00:22:48.818 [2024-11-05 18:12:18.133971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.818 [2024-11-05 18:12:18.134469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.818 [2024-11-05 18:12:18.134485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:48.818 [2024-11-05 18:12:18.134496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:22:48.818 [2024-11-05 18:12:18.134505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.818 [2024-11-05 18:12:18.134614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.818 [2024-11-05 18:12:18.134628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:48.818 [2024-11-05 18:12:18.134638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:22:48.818 [2024-11-05 18:12:18.134653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.152197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.152230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:49.078 [2024-11-05 18:12:18.152246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.553 ms 00:22:49.078 [2024-11-05 18:12:18.152257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.170155] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:49.078 [2024-11-05 18:12:18.170195] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:49.078 [2024-11-05 18:12:18.170210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.170221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:49.078 [2024-11-05 18:12:18.170232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.889 ms 00:22:49.078 [2024-11-05 18:12:18.170241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.198872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.198920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:49.078 [2024-11-05 18:12:18.198949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.633 ms 00:22:49.078 [2024-11-05 18:12:18.198960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.217292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.217459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:49.078 [2024-11-05 18:12:18.217481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.318 ms 00:22:49.078 [2024-11-05 18:12:18.217492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.235275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.235310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:49.078 [2024-11-05 18:12:18.235322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.773 ms 00:22:49.078 [2024-11-05 
18:12:18.235332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.236102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.236139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:49.078 [2024-11-05 18:12:18.236152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:22:49.078 [2024-11-05 18:12:18.236166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.316977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.317034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:49.078 [2024-11-05 18:12:18.317056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.919 ms 00:22:49.078 [2024-11-05 18:12:18.317066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.326996] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:49.078 [2024-11-05 18:12:18.329396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.329437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:49.078 [2024-11-05 18:12:18.329450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.307 ms 00:22:49.078 [2024-11-05 18:12:18.329460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.329534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.329548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:49.078 [2024-11-05 18:12:18.329559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:49.078 [2024-11-05 18:12:18.329572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.329638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.329651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:49.078 [2024-11-05 18:12:18.329662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:49.078 [2024-11-05 18:12:18.329671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.329691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.329702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:49.078 [2024-11-05 18:12:18.329712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:49.078 [2024-11-05 18:12:18.329731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.329765] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:49.078 [2024-11-05 18:12:18.329781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.329791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:49.078 [2024-11-05 18:12:18.329802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:49.078 [2024-11-05 18:12:18.329811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.363400] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.363444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:49.078 [2024-11-05 18:12:18.363458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.624 ms 00:22:49.078 [2024-11-05 18:12:18.363474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.363544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.078 [2024-11-05 18:12:18.363556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:49.078 [2024-11-05 18:12:18.363567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:49.078 [2024-11-05 18:12:18.363577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.078 [2024-11-05 18:12:18.364708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.455 ms, result 0 00:22:50.459  [2024-11-05T18:12:20.722Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-05T18:12:21.666Z] Copying: 46/1024 [MB] (23 MBps) [2024-11-05T18:12:22.631Z] Copying: 68/1024 [MB] (22 MBps) [2024-11-05T18:12:23.569Z] Copying: 91/1024 [MB] (23 MBps) [2024-11-05T18:12:24.506Z] Copying: 114/1024 [MB] (23 MBps) [2024-11-05T18:12:25.444Z] Copying: 137/1024 [MB] (22 MBps) [2024-11-05T18:12:26.383Z] Copying: 160/1024 [MB] (22 MBps) [2024-11-05T18:12:27.762Z] Copying: 183/1024 [MB] (23 MBps) [2024-11-05T18:12:28.699Z] Copying: 207/1024 [MB] (23 MBps) [2024-11-05T18:12:29.637Z] Copying: 230/1024 [MB] (23 MBps) [2024-11-05T18:12:30.576Z] Copying: 252/1024 [MB] (22 MBps) [2024-11-05T18:12:31.515Z] Copying: 275/1024 [MB] (22 MBps) [2024-11-05T18:12:32.453Z] Copying: 298/1024 [MB] (23 MBps) [2024-11-05T18:12:33.391Z] Copying: 322/1024 [MB] (23 MBps) [2024-11-05T18:12:34.770Z] Copying: 344/1024 [MB] (22 MBps) [2024-11-05T18:12:35.708Z] Copying: 367/1024 [MB] (22 MBps) [2024-11-05T18:12:36.649Z] Copying: 390/1024 [MB] (23 MBps) [2024-11-05T18:12:37.586Z] Copying: 415/1024 [MB] (24 MBps) [2024-11-05T18:12:38.525Z] Copying: 438/1024 [MB] (23 MBps) [2024-11-05T18:12:39.463Z] Copying: 461/1024 [MB] (22 MBps) [2024-11-05T18:12:40.400Z] Copying: 484/1024 [MB] (23 MBps) [2024-11-05T18:12:41.339Z] Copying: 507/1024 [MB] (23 MBps) [2024-11-05T18:12:42.718Z] Copying: 530/1024 [MB] (22 MBps) [2024-11-05T18:12:43.655Z] Copying: 553/1024 [MB] (23 MBps) [2024-11-05T18:12:44.593Z] Copying: 576/1024 [MB] (22 MBps) [2024-11-05T18:12:45.531Z] Copying: 598/1024 [MB] (22 MBps) [2024-11-05T18:12:46.470Z] Copying: 620/1024 [MB] (22 MBps) [2024-11-05T18:12:47.408Z] Copying: 643/1024 [MB] (22 MBps) [2024-11-05T18:12:48.346Z] Copying: 666/1024 [MB] (23 MBps) [2024-11-05T18:12:49.726Z] Copying: 689/1024 [MB] (22 MBps) [2024-11-05T18:12:50.680Z] Copying: 711/1024 [MB] (22 MBps) [2024-11-05T18:12:51.618Z] Copying: 734/1024 [MB] (22 MBps) [2024-11-05T18:12:52.556Z] Copying: 756/1024 [MB] (22 MBps) [2024-11-05T18:12:53.494Z] Copying: 778/1024 [MB] (21 MBps) [2024-11-05T18:12:54.433Z] Copying: 800/1024 [MB] (22 MBps) [2024-11-05T18:12:55.370Z] Copying: 823/1024 [MB] (22 MBps) [2024-11-05T18:12:56.749Z] Copying: 845/1024 [MB] (22 MBps) [2024-11-05T18:12:57.318Z] Copying: 868/1024 [MB] (22 MBps) [2024-11-05T18:12:58.698Z] Copying: 890/1024 [MB] (22 MBps) [2024-11-05T18:12:59.635Z] Copying: 913/1024 [MB] (22 MBps) [2024-11-05T18:13:00.573Z] Copying: 936/1024 [MB] (23 MBps) [2024-11-05T18:13:01.512Z] Copying: 
960/1024 [MB] (23 MBps) [2024-11-05T18:13:02.449Z] Copying: 983/1024 [MB] (23 MBps) [2024-11-05T18:13:03.387Z] Copying: 1006/1024 [MB] (22 MBps) [2024-11-05T18:13:03.957Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-05T18:13:03.957Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-05 18:13:03.754352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.634 [2024-11-05 18:13:03.754541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:34.634 [2024-11-05 18:13:03.754646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:34.634 [2024-11-05 18:13:03.754693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.634 [2024-11-05 18:13:03.756484] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:34.634 [2024-11-05 18:13:03.761777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.634 [2024-11-05 18:13:03.761917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:34.634 [2024-11-05 18:13:03.761997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.136 ms 00:23:34.634 [2024-11-05 18:13:03.762034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.634 [2024-11-05 18:13:03.773427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.634 [2024-11-05 18:13:03.773568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:34.634 [2024-11-05 18:13:03.773674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.574 ms 00:23:34.634 [2024-11-05 18:13:03.773712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.634 [2024-11-05 18:13:03.796004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.634 [2024-11-05 18:13:03.796150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:34.634 [2024-11-05 18:13:03.796239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.263 ms 00:23:34.634 [2024-11-05 18:13:03.796277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.634 [2024-11-05 18:13:03.801093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.634 [2024-11-05 18:13:03.801221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:34.634 [2024-11-05 18:13:03.801382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.766 ms 00:23:34.634 [2024-11-05 18:13:03.801436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.634 [2024-11-05 18:13:03.835762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.634 [2024-11-05 18:13:03.835811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:34.634 [2024-11-05 18:13:03.835824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.292 ms 00:23:34.634 [2024-11-05 18:13:03.835834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.634 [2024-11-05 18:13:03.856162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.634 [2024-11-05 18:13:03.856207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:34.634 [2024-11-05 18:13:03.856220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.323 ms 00:23:34.634 [2024-11-05 18:13:03.856231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:34.895 [2024-11-05 18:13:03.974759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.895 [2024-11-05 18:13:03.974812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:34.895 [2024-11-05 18:13:03.974826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.680 ms 00:23:34.895 [2024-11-05 18:13:03.974837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.895 [2024-11-05 18:13:04.009870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.895 [2024-11-05 18:13:04.009904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:34.895 [2024-11-05 18:13:04.009917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.072 ms 00:23:34.895 [2024-11-05 18:13:04.009926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.895 [2024-11-05 18:13:04.044170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.895 [2024-11-05 18:13:04.044226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:34.895 [2024-11-05 18:13:04.044238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.263 ms 00:23:34.895 [2024-11-05 18:13:04.044263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.895 [2024-11-05 18:13:04.077660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.895 [2024-11-05 18:13:04.077694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:34.895 [2024-11-05 18:13:04.077706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.413 ms 00:23:34.895 [2024-11-05 18:13:04.077737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.895 [2024-11-05 18:13:04.110943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.895 [2024-11-05 18:13:04.110977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:34.895 [2024-11-05 18:13:04.110989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.186 ms 00:23:34.895 [2024-11-05 18:13:04.111014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.895 [2024-11-05 18:13:04.111050] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:34.895 [2024-11-05 18:13:04.111065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102144 / 261120 wr_cnt: 1 state: open 00:23:34.895 [2024-11-05 18:13:04.111079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:23:34.895 [2024-11-05 18:13:04.111156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:34.895 [2024-11-05 18:13:04.111828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.111992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112064] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:34.896 [2024-11-05 18:13:04.112261] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:34.896 [2024-11-05 18:13:04.112272] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f1fabc7-dfe3-4e4a-be3d-af24c10698b1 00:23:34.896 [2024-11-05 18:13:04.112283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102144 00:23:34.896 [2024-11-05 18:13:04.112293] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103104 00:23:34.896 [2024-11-05 18:13:04.112303] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102144 00:23:34.896 [2024-11-05 18:13:04.112314] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:23:34.896 [2024-11-05 18:13:04.112323] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:34.896 [2024-11-05 18:13:04.112338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:34.896 [2024-11-05 18:13:04.112358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 
0 00:23:34.896 [2024-11-05 18:13:04.112368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:34.896 [2024-11-05 18:13:04.112377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:34.896 [2024-11-05 18:13:04.112386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.896 [2024-11-05 18:13:04.112396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:34.896 [2024-11-05 18:13:04.112407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:23:34.896 [2024-11-05 18:13:04.112416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.896 [2024-11-05 18:13:04.130986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.896 [2024-11-05 18:13:04.131016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:34.896 [2024-11-05 18:13:04.131035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.554 ms 00:23:34.896 [2024-11-05 18:13:04.131047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.896 [2024-11-05 18:13:04.131543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.896 [2024-11-05 18:13:04.131555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:34.896 [2024-11-05 18:13:04.131567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:23:34.896 [2024-11-05 18:13:04.131576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.896 [2024-11-05 18:13:04.179325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.896 [2024-11-05 18:13:04.179508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.896 [2024-11-05 18:13:04.179536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.896 [2024-11-05 18:13:04.179546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.896 [2024-11-05 18:13:04.179597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.896 [2024-11-05 18:13:04.179608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.896 [2024-11-05 18:13:04.179618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.896 [2024-11-05 18:13:04.179628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.896 [2024-11-05 18:13:04.179690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.896 [2024-11-05 18:13:04.179703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.896 [2024-11-05 18:13:04.179714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.896 [2024-11-05 18:13:04.179728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.896 [2024-11-05 18:13:04.179744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:34.896 [2024-11-05 18:13:04.179755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.896 [2024-11-05 18:13:04.179765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:34.896 [2024-11-05 18:13:04.179774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.296713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.296763] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.163 [2024-11-05 18:13:04.296782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.296809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.391167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.391358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.163 [2024-11-05 18:13:04.391538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.391577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.391680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.391770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.163 [2024-11-05 18:13:04.391808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.391838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.391951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.391988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.163 [2024-11-05 18:13:04.392065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.392099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.392248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.392356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.163 [2024-11-05 18:13:04.392370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.392380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.392439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.392453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:35.163 [2024-11-05 18:13:04.392464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.392473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.392510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.392521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.163 [2024-11-05 18:13:04.392532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.392541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.392586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:35.163 [2024-11-05 18:13:04.392598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.163 [2024-11-05 18:13:04.392609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:35.163 [2024-11-05 18:13:04.392619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.163 [2024-11-05 18:13:04.392734] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL shutdown', duration = 641.771 ms, result 0 00:23:36.570 00:23:36.570 00:23:36.830 18:13:05 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:36.830 [2024-11-05 18:13:05.998949] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:23:36.830 [2024-11-05 18:13:05.999224] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77664 ] 00:23:37.089 [2024-11-05 18:13:06.178905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.089 [2024-11-05 18:13:06.285881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.348 [2024-11-05 18:13:06.627694] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:37.348 [2024-11-05 18:13:06.627765] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:37.609 [2024-11-05 18:13:06.787338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.787388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:37.609 [2024-11-05 18:13:06.787440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:37.609 [2024-11-05 18:13:06.787451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.787500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.787513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:37.609 [2024-11-05 18:13:06.787526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:37.609 [2024-11-05 18:13:06.787536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.787557] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:37.609 [2024-11-05 18:13:06.788502] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:37.609 [2024-11-05 18:13:06.788675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.788690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:37.609 [2024-11-05 18:13:06.788702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.122 ms 00:23:37.609 [2024-11-05 18:13:06.788713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.790159] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:37.609 [2024-11-05 18:13:06.808677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.808715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:37.609 [2024-11-05 18:13:06.808729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms 00:23:37.609 [2024-11-05 18:13:06.808738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.808800] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.808812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:37.609 [2024-11-05 18:13:06.808822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:37.609 [2024-11-05 18:13:06.808832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.815705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.815733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:37.609 [2024-11-05 18:13:06.815744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.816 ms 00:23:37.609 [2024-11-05 18:13:06.815753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.815848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.815861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:37.609 [2024-11-05 18:13:06.815872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:37.609 [2024-11-05 18:13:06.815881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.815919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.815931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:37.609 [2024-11-05 18:13:06.815941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:37.609 [2024-11-05 18:13:06.815951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.815974] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:37.609 [2024-11-05 18:13:06.820854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.820997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:37.609 [2024-11-05 18:13:06.821127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.892 ms 00:23:37.609 [2024-11-05 18:13:06.821170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.821224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.821256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:37.609 [2024-11-05 18:13:06.821287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:37.609 [2024-11-05 18:13:06.821316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.821468] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:37.609 [2024-11-05 18:13:06.821526] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:37.609 [2024-11-05 18:13:06.821598] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:37.609 [2024-11-05 18:13:06.821771] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:37.609 [2024-11-05 18:13:06.821939] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:37.609 
[2024-11-05 18:13:06.821954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:37.609 [2024-11-05 18:13:06.821967] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:37.609 [2024-11-05 18:13:06.821981] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:37.609 [2024-11-05 18:13:06.821994] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:37.609 [2024-11-05 18:13:06.822005] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:37.609 [2024-11-05 18:13:06.822015] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:37.609 [2024-11-05 18:13:06.822025] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:37.609 [2024-11-05 18:13:06.822034] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:37.609 [2024-11-05 18:13:06.822051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.822062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:37.609 [2024-11-05 18:13:06.822072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:23:37.609 [2024-11-05 18:13:06.822082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.822165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.609 [2024-11-05 18:13:06.822175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:37.609 [2024-11-05 18:13:06.822186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:37.609 [2024-11-05 18:13:06.822195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.609 [2024-11-05 18:13:06.822287] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:37.610 [2024-11-05 18:13:06.822305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:37.610 [2024-11-05 18:13:06.822316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:37.610 [2024-11-05 18:13:06.822345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:37.610 [2024-11-05 18:13:06.822373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:37.610 [2024-11-05 18:13:06.822391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:37.610 [2024-11-05 18:13:06.822400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:37.610 [2024-11-05 18:13:06.822430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:37.610 [2024-11-05 18:13:06.822440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:37.610 [2024-11-05 
18:13:06.822450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:37.610 [2024-11-05 18:13:06.822468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:37.610 [2024-11-05 18:13:06.822490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:37.610 [2024-11-05 18:13:06.822519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:37.610 [2024-11-05 18:13:06.822547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:37.610 [2024-11-05 18:13:06.822575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:37.610 [2024-11-05 18:13:06.822603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:37.610 [2024-11-05 18:13:06.822630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:37.610 [2024-11-05 18:13:06.822648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:37.610 [2024-11-05 18:13:06.822657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:37.610 [2024-11-05 18:13:06.822666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:37.610 [2024-11-05 18:13:06.822675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:37.610 [2024-11-05 18:13:06.822685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:37.610 [2024-11-05 18:13:06.822693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:37.610 [2024-11-05 18:13:06.822713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:37.610 [2024-11-05 18:13:06.822722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822731] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:37.610 [2024-11-05 18:13:06.822740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:37.610 [2024-11-05 18:13:06.822750] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:37.610 [2024-11-05 18:13:06.822769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:37.610 [2024-11-05 18:13:06.822780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:37.610 [2024-11-05 18:13:06.822790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:37.610 [2024-11-05 18:13:06.822799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:37.610 [2024-11-05 18:13:06.822808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:37.610 [2024-11-05 18:13:06.822818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:37.610 [2024-11-05 18:13:06.822828] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:37.610 [2024-11-05 18:13:06.822840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:37.610 [2024-11-05 18:13:06.822851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:37.610 [2024-11-05 18:13:06.822862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:37.610 [2024-11-05 18:13:06.822872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:37.610 [2024-11-05 18:13:06.822882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:37.610 [2024-11-05 18:13:06.822892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:37.610 [2024-11-05 18:13:06.822902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:37.610 [2024-11-05 18:13:06.822912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:37.610 [2024-11-05 18:13:06.822922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:37.610 [2024-11-05 18:13:06.822933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:37.610 [2024-11-05 18:13:06.822943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:37.610 [2024-11-05 18:13:06.822953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:37.610 [2024-11-05 18:13:06.822963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:37.610 [2024-11-05 18:13:06.822973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:37.610 [2024-11-05 18:13:06.822983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x7220 blk_sz:0x13c0e0 00:23:37.610 [2024-11-05 18:13:06.822993] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:37.610 [2024-11-05 18:13:06.823007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:37.610 [2024-11-05 18:13:06.823018] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:37.610 [2024-11-05 18:13:06.823028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:37.610 [2024-11-05 18:13:06.823038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:37.610 [2024-11-05 18:13:06.823048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:37.610 [2024-11-05 18:13:06.823058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.610 [2024-11-05 18:13:06.823068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:37.610 [2024-11-05 18:13:06.823079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:23:37.610 [2024-11-05 18:13:06.823089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.610 [2024-11-05 18:13:06.861902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.610 [2024-11-05 18:13:06.861938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:37.610 [2024-11-05 18:13:06.861952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.830 ms 00:23:37.610 [2024-11-05 18:13:06.861962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.610 [2024-11-05 18:13:06.862038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.610 [2024-11-05 18:13:06.862049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:37.610 [2024-11-05 18:13:06.862060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:37.610 [2024-11-05 18:13:06.862070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:06.935329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:06.935364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:37.871 [2024-11-05 18:13:06.935377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.324 ms 00:23:37.871 [2024-11-05 18:13:06.935387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:06.935435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:06.935463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:37.871 [2024-11-05 18:13:06.935474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:37.871 [2024-11-05 18:13:06.935488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:06.936005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:06.936025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
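The superblock metadata tables above describe the same layout in raw FTL blocks (blk_offs/blk_sz are hex block counts). A minimal cross-check, assuming the 4 KiB block size this test family configures (block_size=4096 appears in the dirty_shutdown setup later in this log): the type:0x2 region's 0x5000 blocks line up with the 80.00 MiB l2p region from the MiB dump.

printf '%d MiB\n' $(( 0x5000 * 4096 / 1024 / 1024 ))   # -> 80 MiB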
00:23:37.871 [2024-11-05 18:13:06.936036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:23:37.871 [2024-11-05 18:13:06.936046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:06.936162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:06.936176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:37.871 [2024-11-05 18:13:06.936186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:37.871 [2024-11-05 18:13:06.936200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:06.954579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:06.954763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:37.871 [2024-11-05 18:13:06.954791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.389 ms 00:23:37.871 [2024-11-05 18:13:06.954802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:06.973452] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:37.871 [2024-11-05 18:13:06.973617] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:37.871 [2024-11-05 18:13:06.973637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:06.973648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:37.871 [2024-11-05 18:13:06.973660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.766 ms 00:23:37.871 [2024-11-05 18:13:06.973670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.003450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.003489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:37.871 [2024-11-05 18:13:07.003503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.741 ms 00:23:37.871 [2024-11-05 18:13:07.003514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.021483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.021529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:37.871 [2024-11-05 18:13:07.021541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.954 ms 00:23:37.871 [2024-11-05 18:13:07.021566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.039031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.039192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:37.871 [2024-11-05 18:13:07.039212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.456 ms 00:23:37.871 [2024-11-05 18:13:07.039223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.040043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.040069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:37.871 [2024-11-05 18:13:07.040081] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:23:37.871 [2024-11-05 18:13:07.040095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.121822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.121885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:37.871 [2024-11-05 18:13:07.121908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.838 ms 00:23:37.871 [2024-11-05 18:13:07.121918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.132140] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:37.871 [2024-11-05 18:13:07.134505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.134545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:37.871 [2024-11-05 18:13:07.134558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.561 ms 00:23:37.871 [2024-11-05 18:13:07.134584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.134661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.134675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:37.871 [2024-11-05 18:13:07.134687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:37.871 [2024-11-05 18:13:07.134701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.136166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.136205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:37.871 [2024-11-05 18:13:07.136217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:23:37.871 [2024-11-05 18:13:07.136227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.136255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.136265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:37.871 [2024-11-05 18:13:07.136277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:37.871 [2024-11-05 18:13:07.136287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.136326] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:37.871 [2024-11-05 18:13:07.136341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.136351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:37.871 [2024-11-05 18:13:07.136361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:37.871 [2024-11-05 18:13:07.136372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.170356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.170394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:37.871 [2024-11-05 18:13:07.170417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.020 ms 00:23:37.871 [2024-11-05 18:13:07.170433] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.170520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:37.871 [2024-11-05 18:13:07.170545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:37.871 [2024-11-05 18:13:07.170556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:37.871 [2024-11-05 18:13:07.170566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:37.871 [2024-11-05 18:13:07.171713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.542 ms, result 0 00:23:39.252  [2024-11-05T18:13:09.513Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-05T18:13:10.451Z] Copying: 44/1024 [MB] (25 MBps) [2024-11-05T18:13:11.389Z] Copying: 69/1024 [MB] (24 MBps) [2024-11-05T18:13:12.768Z] Copying: 93/1024 [MB] (24 MBps) [2024-11-05T18:13:13.704Z] Copying: 118/1024 [MB] (24 MBps) [2024-11-05T18:13:14.642Z] Copying: 140/1024 [MB] (22 MBps) [2024-11-05T18:13:15.579Z] Copying: 163/1024 [MB] (22 MBps) [2024-11-05T18:13:16.517Z] Copying: 186/1024 [MB] (22 MBps) [2024-11-05T18:13:17.454Z] Copying: 209/1024 [MB] (23 MBps) [2024-11-05T18:13:18.393Z] Copying: 234/1024 [MB] (25 MBps) [2024-11-05T18:13:19.368Z] Copying: 259/1024 [MB] (24 MBps) [2024-11-05T18:13:20.746Z] Copying: 284/1024 [MB] (25 MBps) [2024-11-05T18:13:21.683Z] Copying: 309/1024 [MB] (25 MBps) [2024-11-05T18:13:22.620Z] Copying: 335/1024 [MB] (25 MBps) [2024-11-05T18:13:23.558Z] Copying: 360/1024 [MB] (25 MBps) [2024-11-05T18:13:24.496Z] Copying: 385/1024 [MB] (25 MBps) [2024-11-05T18:13:25.434Z] Copying: 410/1024 [MB] (24 MBps) [2024-11-05T18:13:26.371Z] Copying: 434/1024 [MB] (24 MBps) [2024-11-05T18:13:27.750Z] Copying: 458/1024 [MB] (24 MBps) [2024-11-05T18:13:28.686Z] Copying: 482/1024 [MB] (24 MBps) [2024-11-05T18:13:29.623Z] Copying: 507/1024 [MB] (24 MBps) [2024-11-05T18:13:30.561Z] Copying: 530/1024 [MB] (23 MBps) [2024-11-05T18:13:31.498Z] Copying: 555/1024 [MB] (24 MBps) [2024-11-05T18:13:32.436Z] Copying: 580/1024 [MB] (24 MBps) [2024-11-05T18:13:33.379Z] Copying: 604/1024 [MB] (24 MBps) [2024-11-05T18:13:34.760Z] Copying: 628/1024 [MB] (24 MBps) [2024-11-05T18:13:35.699Z] Copying: 654/1024 [MB] (25 MBps) [2024-11-05T18:13:36.636Z] Copying: 680/1024 [MB] (25 MBps) [2024-11-05T18:13:37.574Z] Copying: 705/1024 [MB] (25 MBps) [2024-11-05T18:13:38.511Z] Copying: 730/1024 [MB] (25 MBps) [2024-11-05T18:13:39.450Z] Copying: 755/1024 [MB] (24 MBps) [2024-11-05T18:13:40.389Z] Copying: 779/1024 [MB] (24 MBps) [2024-11-05T18:13:41.768Z] Copying: 805/1024 [MB] (25 MBps) [2024-11-05T18:13:42.338Z] Copying: 830/1024 [MB] (25 MBps) [2024-11-05T18:13:43.717Z] Copying: 854/1024 [MB] (24 MBps) [2024-11-05T18:13:44.656Z] Copying: 879/1024 [MB] (24 MBps) [2024-11-05T18:13:45.594Z] Copying: 903/1024 [MB] (24 MBps) [2024-11-05T18:13:46.532Z] Copying: 927/1024 [MB] (24 MBps) [2024-11-05T18:13:47.477Z] Copying: 951/1024 [MB] (24 MBps) [2024-11-05T18:13:48.422Z] Copying: 976/1024 [MB] (25 MBps) [2024-11-05T18:13:49.361Z] Copying: 1001/1024 [MB] (25 MBps) [2024-11-05T18:13:49.361Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-05 18:13:49.248526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.038 [2024-11-05 18:13:49.248609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:20.038 [2024-11-05 18:13:49.248630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:20.038 
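The copy phase above moves the full 1024 MiB test region at per-interval rates of 19-25 MBps; the reported "average 24 MBps" checks out against the wall-clock span (roughly 18:13:07 to 18:13:49, about 42 s), as a rough sketch:

echo "$(( 1024 / 42 )) MBps"   # -> 24, matching "Copying: 1024/1024 [MB] (average 24 MBps)"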
[2024-11-05 18:13:49.248644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.038 [2024-11-05 18:13:49.248692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:20.038 [2024-11-05 18:13:49.255112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.038 [2024-11-05 18:13:49.255269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:20.038 [2024-11-05 18:13:49.255353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms 00:24:20.038 [2024-11-05 18:13:49.255392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.038 [2024-11-05 18:13:49.255663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.038 [2024-11-05 18:13:49.255713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:20.038 [2024-11-05 18:13:49.255747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:24:20.038 [2024-11-05 18:13:49.255832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.038 [2024-11-05 18:13:49.260926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.038 [2024-11-05 18:13:49.261076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:20.038 [2024-11-05 18:13:49.261161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.046 ms 00:24:20.038 [2024-11-05 18:13:49.261199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.038 [2024-11-05 18:13:49.266691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.038 [2024-11-05 18:13:49.266811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:20.038 [2024-11-05 18:13:49.266947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.442 ms 00:24:20.038 [2024-11-05 18:13:49.266963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.038 [2024-11-05 18:13:49.301928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.038 [2024-11-05 18:13:49.301968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:20.038 [2024-11-05 18:13:49.301981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.973 ms 00:24:20.038 [2024-11-05 18:13:49.302006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.038 [2024-11-05 18:13:49.323210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.038 [2024-11-05 18:13:49.323372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:20.038 [2024-11-05 18:13:49.323392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.202 ms 00:24:20.038 [2024-11-05 18:13:49.323403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.299 [2024-11-05 18:13:49.481149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.299 [2024-11-05 18:13:49.481282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:20.299 [2024-11-05 18:13:49.481303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 157.949 ms 00:24:20.299 [2024-11-05 18:13:49.481314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.299 [2024-11-05 18:13:49.517990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.299 [2024-11-05 18:13:49.518028] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:20.299 [2024-11-05 18:13:49.518041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.715 ms 00:24:20.299 [2024-11-05 18:13:49.518051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.299 [2024-11-05 18:13:49.553101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.299 [2024-11-05 18:13:49.553135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:20.299 [2024-11-05 18:13:49.553159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.072 ms 00:24:20.299 [2024-11-05 18:13:49.553168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.299 [2024-11-05 18:13:49.586643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.299 [2024-11-05 18:13:49.586687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:20.299 [2024-11-05 18:13:49.586699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.494 ms 00:24:20.299 [2024-11-05 18:13:49.586724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.299 [2024-11-05 18:13:49.620575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.299 [2024-11-05 18:13:49.620704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:20.299 [2024-11-05 18:13:49.620723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.834 ms 00:24:20.300 [2024-11-05 18:13:49.620749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.300 [2024-11-05 18:13:49.620794] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:20.300 [2024-11-05 18:13:49.620821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:20.300 [2024-11-05 18:13:49.620836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620952] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.620993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 
18:13:49.621212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:20.300 [2024-11-05 18:13:49.621345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:24:20.301 [2024-11-05 18:13:49.621486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:20.301 [2024-11-05 18:13:49.621900] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:20.301 [2024-11-05 18:13:49.621910] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f1fabc7-dfe3-4e4a-be3d-af24c10698b1 00:24:20.301 [2024-11-05 18:13:49.621920] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:20.301 [2024-11-05 18:13:49.621930] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 29888 00:24:20.301 [2024-11-05 18:13:49.621939] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 28928 00:24:20.301 [2024-11-05 18:13:49.621950] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0332 00:24:20.301 [2024-11-05 18:13:49.621959] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:20.301 [2024-11-05 18:13:49.621974] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:20.301 [2024-11-05 18:13:49.621983] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:20.301 [2024-11-05 18:13:49.622000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:20.301 [2024-11-05 18:13:49.622009] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:20.301 [2024-11-05 18:13:49.622019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.301 [2024-11-05 18:13:49.622029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:20.301 [2024-11-05 18:13:49.622039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms 00:24:20.301 [2024-11-05 18:13:49.622048] 
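The shutdown statistics above are internally consistent: only Band 1 is open, holding 131072 valid blocks, which matches "total valid LBAs: 131072", and the reported write-amplification factor is simply total device writes divided by user writes:

awk 'BEGIN { printf "WAF = %.4f\n", 29888 / 28928 }'   # -> WAF = 1.0332, as reported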
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.560 [2024-11-05 18:13:49.640735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.560 [2024-11-05 18:13:49.640767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:20.560 [2024-11-05 18:13:49.640778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.680 ms 00:24:20.560 [2024-11-05 18:13:49.640793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.560 [2024-11-05 18:13:49.641294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.560 [2024-11-05 18:13:49.641308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:20.560 [2024-11-05 18:13:49.641318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:24:20.560 [2024-11-05 18:13:49.641327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.560 [2024-11-05 18:13:49.689536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.561 [2024-11-05 18:13:49.689569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:20.561 [2024-11-05 18:13:49.689587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.561 [2024-11-05 18:13:49.689596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.561 [2024-11-05 18:13:49.689653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.561 [2024-11-05 18:13:49.689666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:20.561 [2024-11-05 18:13:49.689676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.561 [2024-11-05 18:13:49.689685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.561 [2024-11-05 18:13:49.689776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.561 [2024-11-05 18:13:49.689790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:20.561 [2024-11-05 18:13:49.689800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.561 [2024-11-05 18:13:49.689814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.561 [2024-11-05 18:13:49.689830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.561 [2024-11-05 18:13:49.689840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:20.561 [2024-11-05 18:13:49.689850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.561 [2024-11-05 18:13:49.689859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.561 [2024-11-05 18:13:49.803081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.561 [2024-11-05 18:13:49.803148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:20.561 [2024-11-05 18:13:49.803168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.561 [2024-11-05 18:13:49.803177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.898718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.820 [2024-11-05 18:13:49.898922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:20.820 [2024-11-05 18:13:49.898943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:20.820 [2024-11-05 18:13:49.898954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.899037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.820 [2024-11-05 18:13:49.899051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:20.820 [2024-11-05 18:13:49.899061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.820 [2024-11-05 18:13:49.899071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.899110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.820 [2024-11-05 18:13:49.899120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:20.820 [2024-11-05 18:13:49.899130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.820 [2024-11-05 18:13:49.899139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.899260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.820 [2024-11-05 18:13:49.899272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:20.820 [2024-11-05 18:13:49.899283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.820 [2024-11-05 18:13:49.899292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.899349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.820 [2024-11-05 18:13:49.899364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:20.820 [2024-11-05 18:13:49.899374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.820 [2024-11-05 18:13:49.899383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.899439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.820 [2024-11-05 18:13:49.899451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:20.820 [2024-11-05 18:13:49.899461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.820 [2024-11-05 18:13:49.899471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.899527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.820 [2024-11-05 18:13:49.899543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:20.820 [2024-11-05 18:13:49.899553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.820 [2024-11-05 18:13:49.899562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.820 [2024-11-05 18:13:49.899730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 652.230 ms, result 0 00:24:21.758 00:24:21.758 00:24:21.758 18:13:50 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:23.665 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:23.665 18:13:52 
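The "testfile: OK" line above is the pass criterion of the restore test: an md5 manifest recorded before the shutdown must verify after the FTL device is brought back. A minimal sketch of that pattern (paths shortened; the real run writes through the FTL bdev):

md5sum testfile > testfile.md5    # record the checksum up front
# ... FTL shutdown and restore happen in between ...
md5sum -c testfile.md5            # prints "testfile: OK" if the data survived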
ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:23.665 Process with pid 75969 is not found 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 75969 00:24:23.665 18:13:52 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 75969 ']' 00:24:23.665 18:13:52 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 75969 00:24:23.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (75969) - No such process 00:24:23.665 18:13:52 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 75969 is not found' 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:23.665 Remove shared memory files 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:23.665 18:13:52 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:23.665 ************************************ 00:24:23.665 END TEST ftl_restore 00:24:23.665 ************************************ 00:24:23.665 00:24:23.665 real 3m30.221s 00:24:23.665 user 3m18.202s 00:24:23.665 sys 0m13.381s 00:24:23.665 18:13:52 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:23.665 18:13:52 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:23.665 18:13:52 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:23.665 18:13:52 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:23.665 18:13:52 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:23.665 18:13:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:23.665 ************************************ 00:24:23.665 START TEST ftl_dirty_shutdown 00:24:23.665 ************************************ 00:24:23.665 18:13:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:23.666 * Looking for test storage... 
00:24:23.666 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:23.666 18:13:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:23.666 18:13:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:23.666 18:13:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.925 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.926 --rc genhtml_branch_coverage=1 00:24:23.926 --rc genhtml_function_coverage=1 00:24:23.926 --rc genhtml_legend=1 00:24:23.926 --rc geninfo_all_blocks=1 00:24:23.926 --rc geninfo_unexecuted_blocks=1 00:24:23.926 00:24:23.926 ' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.926 --rc genhtml_branch_coverage=1 00:24:23.926 --rc genhtml_function_coverage=1 00:24:23.926 --rc genhtml_legend=1 00:24:23.926 --rc geninfo_all_blocks=1 00:24:23.926 --rc geninfo_unexecuted_blocks=1 00:24:23.926 00:24:23.926 ' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.926 --rc genhtml_branch_coverage=1 00:24:23.926 --rc genhtml_function_coverage=1 00:24:23.926 --rc genhtml_legend=1 00:24:23.926 --rc geninfo_all_blocks=1 00:24:23.926 --rc geninfo_unexecuted_blocks=1 00:24:23.926 00:24:23.926 ' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:23.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.926 --rc genhtml_branch_coverage=1 00:24:23.926 --rc genhtml_function_coverage=1 00:24:23.926 --rc genhtml_legend=1 00:24:23.926 --rc geninfo_all_blocks=1 00:24:23.926 --rc geninfo_unexecuted_blocks=1 00:24:23.926 00:24:23.926 ' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
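The xtrace above walks scripts/common.sh comparing the installed lcov version against 2: cmp_versions splits each version on ".", "-" and ":", then compares component by component until one side wins. A simplified stand-alone sketch of the same idea, splitting on dots only:

cmp_versions_sketch() {            # usage: cmp_versions_sketch 1.15 '<' 2
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '==' ]]
}
cmp_versions_sketch 1.15 '<' 2 && echo "lcov 1.15 predates 2"   # -> true here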
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:23.926 18:13:53 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78211 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78211 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:23.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78211 ']' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:23.926 18:13:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:23.926 [2024-11-05 18:13:53.225952] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
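For orientation: the trap / spdk_tgt / waitforlisten sequence traced above (dirty_shutdown.sh@42-47) is the usual SPDK test startup pattern — install a cleanup trap, launch the target in the background, record its PID, and block until the RPC socket answers. A minimal sketch, assuming the common helpers (waitforlisten, and the test's own restore_kill cleanup function) are already sourced as in the trace:

    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT   # tear down on any exit path
    "$rootdir/build/bin/spdk_tgt" -m 0x1 &            # one reactor, core 0 (mask 0x1)
    svcpid=$!
    waitforlisten "$svcpid"                           # block until /var/tmp/spdk.sock accepts RPCs
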
00:24:23.926 [2024-11-05 18:13:53.226089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78211 ] 00:24:24.186 [2024-11-05 18:13:53.402509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.186 [2024-11-05 18:13:53.507600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:25.124 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:25.383 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:25.642 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:25.642 { 00:24:25.642 "name": "nvme0n1", 00:24:25.642 "aliases": [ 00:24:25.642 "58aeacee-3ea9-458b-a4ea-965860f62d3b" 00:24:25.642 ], 00:24:25.642 "product_name": "NVMe disk", 00:24:25.642 "block_size": 4096, 00:24:25.642 "num_blocks": 1310720, 00:24:25.642 "uuid": "58aeacee-3ea9-458b-a4ea-965860f62d3b", 00:24:25.642 "numa_id": -1, 00:24:25.643 "assigned_rate_limits": { 00:24:25.643 "rw_ios_per_sec": 0, 00:24:25.643 "rw_mbytes_per_sec": 0, 00:24:25.643 "r_mbytes_per_sec": 0, 00:24:25.643 "w_mbytes_per_sec": 0 00:24:25.643 }, 00:24:25.643 "claimed": true, 00:24:25.643 "claim_type": "read_many_write_one", 00:24:25.643 "zoned": false, 00:24:25.643 "supported_io_types": { 00:24:25.643 "read": true, 00:24:25.643 "write": true, 00:24:25.643 "unmap": true, 00:24:25.643 "flush": true, 00:24:25.643 "reset": true, 00:24:25.643 "nvme_admin": true, 00:24:25.643 "nvme_io": true, 00:24:25.643 "nvme_io_md": false, 00:24:25.643 "write_zeroes": true, 00:24:25.643 "zcopy": false, 00:24:25.643 "get_zone_info": false, 00:24:25.643 "zone_management": false, 00:24:25.643 "zone_append": false, 00:24:25.643 "compare": true, 00:24:25.643 "compare_and_write": false, 00:24:25.643 "abort": true, 00:24:25.643 "seek_hole": false, 00:24:25.643 "seek_data": false, 00:24:25.643 
"copy": true, 00:24:25.643 "nvme_iov_md": false 00:24:25.643 }, 00:24:25.643 "driver_specific": { 00:24:25.643 "nvme": [ 00:24:25.643 { 00:24:25.643 "pci_address": "0000:00:11.0", 00:24:25.643 "trid": { 00:24:25.643 "trtype": "PCIe", 00:24:25.643 "traddr": "0000:00:11.0" 00:24:25.643 }, 00:24:25.643 "ctrlr_data": { 00:24:25.643 "cntlid": 0, 00:24:25.643 "vendor_id": "0x1b36", 00:24:25.643 "model_number": "QEMU NVMe Ctrl", 00:24:25.643 "serial_number": "12341", 00:24:25.643 "firmware_revision": "8.0.0", 00:24:25.643 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:25.643 "oacs": { 00:24:25.643 "security": 0, 00:24:25.643 "format": 1, 00:24:25.643 "firmware": 0, 00:24:25.643 "ns_manage": 1 00:24:25.643 }, 00:24:25.643 "multi_ctrlr": false, 00:24:25.643 "ana_reporting": false 00:24:25.643 }, 00:24:25.643 "vs": { 00:24:25.643 "nvme_version": "1.4" 00:24:25.643 }, 00:24:25.643 "ns_data": { 00:24:25.643 "id": 1, 00:24:25.643 "can_share": false 00:24:25.643 } 00:24:25.643 } 00:24:25.643 ], 00:24:25.643 "mp_policy": "active_passive" 00:24:25.643 } 00:24:25.643 } 00:24:25.643 ]' 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:25.643 18:13:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:25.902 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=52602b4b-50d9-4cd6-bb96-71b2c0c63c4f 00:24:25.902 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:25.902 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52602b4b-50d9-4cd6-bb96-71b2c0c63c4f 00:24:26.161 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:26.421 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=b7d51c16-b956-4ff7-bb02-b16cb0607592 00:24:26.421 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b7d51c16-b956-4ff7-bb02-b16cb0607592 00:24:26.680 18:13:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:26.680 18:13:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:26.680 18:13:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:26.681 { 00:24:26.681 "name": "7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54", 00:24:26.681 "aliases": [ 00:24:26.681 "lvs/nvme0n1p0" 00:24:26.681 ], 00:24:26.681 "product_name": "Logical Volume", 00:24:26.681 "block_size": 4096, 00:24:26.681 "num_blocks": 26476544, 00:24:26.681 "uuid": "7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54", 00:24:26.681 "assigned_rate_limits": { 00:24:26.681 "rw_ios_per_sec": 0, 00:24:26.681 "rw_mbytes_per_sec": 0, 00:24:26.681 "r_mbytes_per_sec": 0, 00:24:26.681 "w_mbytes_per_sec": 0 00:24:26.681 }, 00:24:26.681 "claimed": false, 00:24:26.681 "zoned": false, 00:24:26.681 "supported_io_types": { 00:24:26.681 "read": true, 00:24:26.681 "write": true, 00:24:26.681 "unmap": true, 00:24:26.681 "flush": false, 00:24:26.681 "reset": true, 00:24:26.681 "nvme_admin": false, 00:24:26.681 "nvme_io": false, 00:24:26.681 "nvme_io_md": false, 00:24:26.681 "write_zeroes": true, 00:24:26.681 "zcopy": false, 00:24:26.681 "get_zone_info": false, 00:24:26.681 "zone_management": false, 00:24:26.681 "zone_append": false, 00:24:26.681 "compare": false, 00:24:26.681 "compare_and_write": false, 00:24:26.681 "abort": false, 00:24:26.681 "seek_hole": true, 00:24:26.681 "seek_data": true, 00:24:26.681 "copy": false, 00:24:26.681 "nvme_iov_md": false 00:24:26.681 }, 00:24:26.681 "driver_specific": { 00:24:26.681 "lvol": { 00:24:26.681 "lvol_store_uuid": "b7d51c16-b956-4ff7-bb02-b16cb0607592", 00:24:26.681 "base_bdev": "nvme0n1", 00:24:26.681 "thin_provision": true, 00:24:26.681 "num_allocated_clusters": 0, 00:24:26.681 "snapshot": false, 00:24:26.681 "clone": false, 00:24:26.681 "esnap_clone": false 00:24:26.681 } 00:24:26.681 } 00:24:26.681 } 00:24:26.681 ]' 00:24:26.681 18:13:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:26.940 18:13:56 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:27.200 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:27.459 { 00:24:27.459 "name": "7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54", 00:24:27.459 "aliases": [ 00:24:27.459 "lvs/nvme0n1p0" 00:24:27.459 ], 00:24:27.459 "product_name": "Logical Volume", 00:24:27.459 "block_size": 4096, 00:24:27.459 "num_blocks": 26476544, 00:24:27.459 "uuid": "7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54", 00:24:27.459 "assigned_rate_limits": { 00:24:27.459 "rw_ios_per_sec": 0, 00:24:27.459 "rw_mbytes_per_sec": 0, 00:24:27.459 "r_mbytes_per_sec": 0, 00:24:27.459 "w_mbytes_per_sec": 0 00:24:27.459 }, 00:24:27.459 "claimed": false, 00:24:27.459 "zoned": false, 00:24:27.459 "supported_io_types": { 00:24:27.459 "read": true, 00:24:27.459 "write": true, 00:24:27.459 "unmap": true, 00:24:27.459 "flush": false, 00:24:27.459 "reset": true, 00:24:27.459 "nvme_admin": false, 00:24:27.459 "nvme_io": false, 00:24:27.459 "nvme_io_md": false, 00:24:27.459 "write_zeroes": true, 00:24:27.459 "zcopy": false, 00:24:27.459 "get_zone_info": false, 00:24:27.459 "zone_management": false, 00:24:27.459 "zone_append": false, 00:24:27.459 "compare": false, 00:24:27.459 "compare_and_write": false, 00:24:27.459 "abort": false, 00:24:27.459 "seek_hole": true, 00:24:27.459 "seek_data": true, 00:24:27.459 "copy": false, 00:24:27.459 "nvme_iov_md": false 00:24:27.459 }, 00:24:27.459 "driver_specific": { 00:24:27.459 "lvol": { 00:24:27.459 "lvol_store_uuid": "b7d51c16-b956-4ff7-bb02-b16cb0607592", 00:24:27.459 "base_bdev": "nvme0n1", 00:24:27.459 "thin_provision": true, 00:24:27.459 "num_allocated_clusters": 0, 00:24:27.459 "snapshot": false, 00:24:27.459 "clone": false, 00:24:27.459 "esnap_clone": false 00:24:27.459 } 00:24:27.459 } 00:24:27.459 } 00:24:27.459 ]' 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:27.459 18:13:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:27.719 18:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:27.719 18:13:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:27.719 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:27.719 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:24:27.719 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:24:27.719 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:24:27.719 18:13:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 00:24:27.719 18:13:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:24:27.719 { 00:24:27.719 "name": "7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54", 00:24:27.719 "aliases": [ 00:24:27.719 "lvs/nvme0n1p0" 00:24:27.719 ], 00:24:27.719 "product_name": "Logical Volume", 00:24:27.719 "block_size": 4096, 00:24:27.719 "num_blocks": 26476544, 00:24:27.719 "uuid": "7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54", 00:24:27.719 "assigned_rate_limits": { 00:24:27.719 "rw_ios_per_sec": 0, 00:24:27.719 "rw_mbytes_per_sec": 0, 00:24:27.719 "r_mbytes_per_sec": 0, 00:24:27.719 "w_mbytes_per_sec": 0 00:24:27.719 }, 00:24:27.719 "claimed": false, 00:24:27.719 "zoned": false, 00:24:27.719 "supported_io_types": { 00:24:27.719 "read": true, 00:24:27.719 "write": true, 00:24:27.719 "unmap": true, 00:24:27.719 "flush": false, 00:24:27.719 "reset": true, 00:24:27.719 "nvme_admin": false, 00:24:27.719 "nvme_io": false, 00:24:27.719 "nvme_io_md": false, 00:24:27.719 "write_zeroes": true, 00:24:27.719 "zcopy": false, 00:24:27.719 "get_zone_info": false, 00:24:27.719 "zone_management": false, 00:24:27.719 "zone_append": false, 00:24:27.719 "compare": false, 00:24:27.719 "compare_and_write": false, 00:24:27.719 "abort": false, 00:24:27.719 "seek_hole": true, 00:24:27.719 "seek_data": true, 00:24:27.719 "copy": false, 00:24:27.719 "nvme_iov_md": false 00:24:27.719 }, 00:24:27.719 "driver_specific": { 00:24:27.719 "lvol": { 00:24:27.719 "lvol_store_uuid": "b7d51c16-b956-4ff7-bb02-b16cb0607592", 00:24:27.719 "base_bdev": "nvme0n1", 00:24:27.719 "thin_provision": true, 00:24:27.719 "num_allocated_clusters": 0, 00:24:27.719 "snapshot": false, 00:24:27.719 "clone": false, 00:24:27.719 "esnap_clone": false 00:24:27.719 } 00:24:27.719 } 00:24:27.719 } 00:24:27.719 ]' 00:24:27.719 18:13:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 
--l2p_dram_limit 10' 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:27.979 18:13:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7f6b31ef-e3d6-4b64-bbae-cb6fcd9d1b54 --l2p_dram_limit 10 -c nvc0n1p0 00:24:27.979 [2024-11-05 18:13:57.273993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.979 [2024-11-05 18:13:57.274038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:27.979 [2024-11-05 18:13:57.274056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:27.979 [2024-11-05 18:13:57.274066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.979 [2024-11-05 18:13:57.274119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.979 [2024-11-05 18:13:57.274130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:27.979 [2024-11-05 18:13:57.274143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:27.979 [2024-11-05 18:13:57.274152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.979 [2024-11-05 18:13:57.274180] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:27.979 [2024-11-05 18:13:57.275133] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:27.979 [2024-11-05 18:13:57.275167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.275178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:27.980 [2024-11-05 18:13:57.275193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:24:27.980 [2024-11-05 18:13:57.275203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.275281] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 15ee1f9e-64f6-4c91-a0d6-bdf6d43b3bc5 00:24:27.980 [2024-11-05 18:13:57.276893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.277031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:27.980 [2024-11-05 18:13:57.277116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:27.980 [2024-11-05 18:13:57.277157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.284724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.284879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:27.980 [2024-11-05 18:13:57.285008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.500 ms 00:24:27.980 [2024-11-05 18:13:57.285051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.285172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.285262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:27.980 [2024-11-05 18:13:57.285298] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:27.980 [2024-11-05 18:13:57.285336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.285464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.285510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:27.980 [2024-11-05 18:13:57.285592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:27.980 [2024-11-05 18:13:57.285634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.285684] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:27.980 [2024-11-05 18:13:57.290745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.290895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:27.980 [2024-11-05 18:13:57.291060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.073 ms 00:24:27.980 [2024-11-05 18:13:57.291148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.291213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.291282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:27.980 [2024-11-05 18:13:57.291322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:27.980 [2024-11-05 18:13:57.291351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.291406] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:27.980 [2024-11-05 18:13:57.291564] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:27.980 [2024-11-05 18:13:57.291587] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:27.980 [2024-11-05 18:13:57.291602] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:27.980 [2024-11-05 18:13:57.291618] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:27.980 [2024-11-05 18:13:57.291630] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:27.980 [2024-11-05 18:13:57.291644] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:27.980 [2024-11-05 18:13:57.291654] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:27.980 [2024-11-05 18:13:57.291669] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:27.980 [2024-11-05 18:13:57.291679] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:27.980 [2024-11-05 18:13:57.291693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.291704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:27.980 [2024-11-05 18:13:57.291717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:24:27.980 [2024-11-05 18:13:57.291736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.291814] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.980 [2024-11-05 18:13:57.291826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:27.980 [2024-11-05 18:13:57.291839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:27.980 [2024-11-05 18:13:57.291850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.980 [2024-11-05 18:13:57.291943] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:27.980 [2024-11-05 18:13:57.291955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:27.980 [2024-11-05 18:13:57.291968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.980 [2024-11-05 18:13:57.291978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.980 [2024-11-05 18:13:57.291991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:27.980 [2024-11-05 18:13:57.292001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:27.980 [2024-11-05 18:13:57.292022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:27.980 [2024-11-05 18:13:57.292034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.980 [2024-11-05 18:13:57.292055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:27.980 [2024-11-05 18:13:57.292064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:27.980 [2024-11-05 18:13:57.292075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:27.980 [2024-11-05 18:13:57.292085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:27.980 [2024-11-05 18:13:57.292096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:27.980 [2024-11-05 18:13:57.292106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:27.980 [2024-11-05 18:13:57.292132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:27.980 [2024-11-05 18:13:57.292145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:27.980 [2024-11-05 18:13:57.292166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.980 [2024-11-05 18:13:57.292187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:27.980 [2024-11-05 18:13:57.292196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.980 [2024-11-05 18:13:57.292217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:27.980 [2024-11-05 18:13:57.292228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.980 [2024-11-05 18:13:57.292249] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:27.980 [2024-11-05 18:13:57.292258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:27.980 [2024-11-05 18:13:57.292279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:27.980 [2024-11-05 18:13:57.292292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.980 [2024-11-05 18:13:57.292313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:27.980 [2024-11-05 18:13:57.292322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:27.980 [2024-11-05 18:13:57.292334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:27.980 [2024-11-05 18:13:57.292343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:27.980 [2024-11-05 18:13:57.292354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:27.980 [2024-11-05 18:13:57.292363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:27.980 [2024-11-05 18:13:57.292384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:27.980 [2024-11-05 18:13:57.292395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292404] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:27.980 [2024-11-05 18:13:57.292610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:27.980 [2024-11-05 18:13:57.292644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:27.980 [2024-11-05 18:13:57.292679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:27.980 [2024-11-05 18:13:57.292709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:27.980 [2024-11-05 18:13:57.292796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:27.980 [2024-11-05 18:13:57.292831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:27.980 [2024-11-05 18:13:57.292865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:27.980 [2024-11-05 18:13:57.292895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:27.980 [2024-11-05 18:13:57.292927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:27.980 [2024-11-05 18:13:57.293004] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:27.980 [2024-11-05 18:13:57.293106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.980 [2024-11-05 18:13:57.293195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:27.980 [2024-11-05 18:13:57.293248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:27.981 [2024-11-05 18:13:57.293296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:27.981 [2024-11-05 18:13:57.293458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:27.981 [2024-11-05 18:13:57.293508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:27.981 [2024-11-05 18:13:57.293557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:27.981 [2024-11-05 18:13:57.293606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:27.981 [2024-11-05 18:13:57.293705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:27.981 [2024-11-05 18:13:57.293771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:27.981 [2024-11-05 18:13:57.293824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:27.981 [2024-11-05 18:13:57.293871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:27.981 [2024-11-05 18:13:57.294006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:27.981 [2024-11-05 18:13:57.294020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:27.981 [2024-11-05 18:13:57.294035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:27.981 [2024-11-05 18:13:57.294046] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:27.981 [2024-11-05 18:13:57.294060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:27.981 [2024-11-05 18:13:57.294072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:27.981 [2024-11-05 18:13:57.294085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:27.981 [2024-11-05 18:13:57.294096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:27.981 [2024-11-05 18:13:57.294109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:27.981 [2024-11-05 18:13:57.294121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.981 [2024-11-05 18:13:57.294134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:27.981 [2024-11-05 18:13:57.294146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.240 ms 00:24:27.981 [2024-11-05 18:13:57.294158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.981 [2024-11-05 18:13:57.294206] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:27.981 [2024-11-05 18:13:57.294223] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:32.178 [2024-11-05 18:14:01.086764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.178 [2024-11-05 18:14:01.086826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:32.178 [2024-11-05 18:14:01.086843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3798.714 ms 00:24:32.178 [2024-11-05 18:14:01.086855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.178 [2024-11-05 18:14:01.123048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.178 [2024-11-05 18:14:01.123102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:32.178 [2024-11-05 18:14:01.123117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.952 ms 00:24:32.178 [2024-11-05 18:14:01.123130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.178 [2024-11-05 18:14:01.123244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.178 [2024-11-05 18:14:01.123260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:32.178 [2024-11-05 18:14:01.123271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:32.178 [2024-11-05 18:14:01.123286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.178 [2024-11-05 18:14:01.166930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.178 [2024-11-05 18:14:01.166974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:32.178 [2024-11-05 18:14:01.166988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.675 ms 00:24:32.178 [2024-11-05 18:14:01.167001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.178 [2024-11-05 18:14:01.167032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.178 [2024-11-05 18:14:01.167049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:32.178 [2024-11-05 18:14:01.167059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:32.179 [2024-11-05 18:14:01.167071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.167568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.167589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:32.179 [2024-11-05 18:14:01.167599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:24:32.179 [2024-11-05 18:14:01.167611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.167701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.167714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:32.179 [2024-11-05 18:14:01.167728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:24:32.179 [2024-11-05 18:14:01.167744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.187167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.187207] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:32.179 [2024-11-05 18:14:01.187220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.437 ms 00:24:32.179 [2024-11-05 18:14:01.187232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.199035] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:32.179 [2024-11-05 18:14:01.202149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.202314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:32.179 [2024-11-05 18:14:01.202340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.866 ms 00:24:32.179 [2024-11-05 18:14:01.202351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.314214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.314263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:32.179 [2024-11-05 18:14:01.314283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.008 ms 00:24:32.179 [2024-11-05 18:14:01.314294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.314567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.314588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:32.179 [2024-11-05 18:14:01.314606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:24:32.179 [2024-11-05 18:14:01.314616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.350018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.350055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:32.179 [2024-11-05 18:14:01.350072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.385 ms 00:24:32.179 [2024-11-05 18:14:01.350082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.384465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.384622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:32.179 [2024-11-05 18:14:01.384650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.392 ms 00:24:32.179 [2024-11-05 18:14:01.384661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.385307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.385329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:32.179 [2024-11-05 18:14:01.385344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:24:32.179 [2024-11-05 18:14:01.385354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.179 [2024-11-05 18:14:01.484752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.179 [2024-11-05 18:14:01.484789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:32.179 [2024-11-05 18:14:01.484809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.482 ms 00:24:32.179 [2024-11-05 18:14:01.484820] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.439 [2024-11-05 18:14:01.519553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.439 [2024-11-05 18:14:01.519592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:32.439 [2024-11-05 18:14:01.519607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.710 ms 00:24:32.439 [2024-11-05 18:14:01.519617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.439 [2024-11-05 18:14:01.552691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.439 [2024-11-05 18:14:01.552737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:32.439 [2024-11-05 18:14:01.552754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.085 ms 00:24:32.439 [2024-11-05 18:14:01.552763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.439 [2024-11-05 18:14:01.586096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.439 [2024-11-05 18:14:01.586237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:32.439 [2024-11-05 18:14:01.586278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.345 ms 00:24:32.439 [2024-11-05 18:14:01.586290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.439 [2024-11-05 18:14:01.586333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.439 [2024-11-05 18:14:01.586345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:32.439 [2024-11-05 18:14:01.586361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:32.439 [2024-11-05 18:14:01.586372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.439 [2024-11-05 18:14:01.586485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:32.439 [2024-11-05 18:14:01.586499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:32.439 [2024-11-05 18:14:01.586516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:32.439 [2024-11-05 18:14:01.586527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:32.439 [2024-11-05 18:14:01.587611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4320.165 ms, result 0 00:24:32.439 { 00:24:32.439 "name": "ftl0", 00:24:32.439 "uuid": "15ee1f9e-64f6-4c91-a0d6-bdf6d43b3bc5" 00:24:32.439 } 00:24:32.439 18:14:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:24:32.439 18:14:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:32.698 18:14:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:24:32.698 18:14:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:24:32.698 18:14:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:24:32.979 /dev/nbd0 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:24:32.979 1+0 records in 00:24:32.979 1+0 records out 00:24:32.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309738 s, 13.2 MB/s 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:24:32.979 18:14:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:24:32.979 [2024-11-05 18:14:02.180869] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:24:32.979 [2024-11-05 18:14:02.180985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78361 ] 00:24:33.255 [2024-11-05 18:14:02.362935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.255 [2024-11-05 18:14:02.488948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.632  [2024-11-05T18:14:04.893Z] Copying: 206/1024 [MB] (206 MBps) [2024-11-05T18:14:06.272Z] Copying: 413/1024 [MB] (207 MBps) [2024-11-05T18:14:06.840Z] Copying: 621/1024 [MB] (207 MBps) [2024-11-05T18:14:08.217Z] Copying: 829/1024 [MB] (207 MBps) [2024-11-05T18:14:09.159Z] Copying: 1024/1024 [MB] (average 205 MBps) 00:24:39.836 00:24:39.836 18:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:41.742 18:14:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:24:41.742 [2024-11-05 18:14:10.777985] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
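The waitfornbd helper traced above (autotest_common.sh@870-891) gates the I/O phases that follow: it polls /proc/partitions until the kernel publishes the NBD device, then proves the device actually serves data with a single 4 KiB direct-I/O read. A condensed sketch of the same checks — the temp-file path and poll pacing here are assumptions; the loop bound and the grep/dd/stat probes follow the trace:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break   # device visible yet?
            sleep 0.1                                          # assumed pacing between polls
        done
        # one direct-I/O read: proves the NBD connection is answering, not just registered
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]                                       # non-empty read-back => ready
    }
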
00:24:41.742 [2024-11-05 18:14:10.778099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78448 ] 00:24:41.742 [2024-11-05 18:14:10.957359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.001 [2024-11-05 18:14:11.089212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.380  [2024-11-05T18:14:13.640Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-05T18:14:14.578Z] Copying: 30/1024 [MB] (15 MBps) [2024-11-05T18:14:15.515Z] Copying: 45/1024 [MB] (15 MBps) [2024-11-05T18:14:16.453Z] Copying: 61/1024 [MB] (16 MBps) [2024-11-05T18:14:17.837Z] Copying: 78/1024 [MB] (16 MBps) [2024-11-05T18:14:18.775Z] Copying: 95/1024 [MB] (16 MBps) [2024-11-05T18:14:19.712Z] Copying: 112/1024 [MB] (17 MBps) [2024-11-05T18:14:20.649Z] Copying: 129/1024 [MB] (16 MBps) [2024-11-05T18:14:21.587Z] Copying: 145/1024 [MB] (16 MBps) [2024-11-05T18:14:22.524Z] Copying: 163/1024 [MB] (17 MBps) [2024-11-05T18:14:23.461Z] Copying: 180/1024 [MB] (16 MBps) [2024-11-05T18:14:24.839Z] Copying: 197/1024 [MB] (17 MBps) [2024-11-05T18:14:25.777Z] Copying: 214/1024 [MB] (16 MBps) [2024-11-05T18:14:26.714Z] Copying: 231/1024 [MB] (17 MBps) [2024-11-05T18:14:27.652Z] Copying: 248/1024 [MB] (17 MBps) [2024-11-05T18:14:28.589Z] Copying: 265/1024 [MB] (17 MBps) [2024-11-05T18:14:29.526Z] Copying: 282/1024 [MB] (16 MBps) [2024-11-05T18:14:30.462Z] Copying: 299/1024 [MB] (17 MBps) [2024-11-05T18:14:31.844Z] Copying: 317/1024 [MB] (17 MBps) [2024-11-05T18:14:32.411Z] Copying: 334/1024 [MB] (17 MBps) [2024-11-05T18:14:33.789Z] Copying: 351/1024 [MB] (16 MBps) [2024-11-05T18:14:34.727Z] Copying: 368/1024 [MB] (16 MBps) [2024-11-05T18:14:35.663Z] Copying: 385/1024 [MB] (17 MBps) [2024-11-05T18:14:36.600Z] Copying: 403/1024 [MB] (17 MBps) [2024-11-05T18:14:37.537Z] Copying: 420/1024 [MB] (17 MBps) [2024-11-05T18:14:38.474Z] Copying: 437/1024 [MB] (17 MBps) [2024-11-05T18:14:39.410Z] Copying: 453/1024 [MB] (16 MBps) [2024-11-05T18:14:40.787Z] Copying: 470/1024 [MB] (16 MBps) [2024-11-05T18:14:41.725Z] Copying: 487/1024 [MB] (17 MBps) [2024-11-05T18:14:42.661Z] Copying: 504/1024 [MB] (17 MBps) [2024-11-05T18:14:43.598Z] Copying: 522/1024 [MB] (17 MBps) [2024-11-05T18:14:44.536Z] Copying: 540/1024 [MB] (17 MBps) [2024-11-05T18:14:45.472Z] Copying: 557/1024 [MB] (17 MBps) [2024-11-05T18:14:46.411Z] Copying: 573/1024 [MB] (16 MBps) [2024-11-05T18:14:47.789Z] Copying: 590/1024 [MB] (16 MBps) [2024-11-05T18:14:48.725Z] Copying: 607/1024 [MB] (17 MBps) [2024-11-05T18:14:49.661Z] Copying: 624/1024 [MB] (17 MBps) [2024-11-05T18:14:50.596Z] Copying: 641/1024 [MB] (17 MBps) [2024-11-05T18:14:51.533Z] Copying: 658/1024 [MB] (16 MBps) [2024-11-05T18:14:52.470Z] Copying: 674/1024 [MB] (16 MBps) [2024-11-05T18:14:53.408Z] Copying: 690/1024 [MB] (16 MBps) [2024-11-05T18:14:54.786Z] Copying: 706/1024 [MB] (16 MBps) [2024-11-05T18:14:55.723Z] Copying: 723/1024 [MB] (16 MBps) [2024-11-05T18:14:56.659Z] Copying: 740/1024 [MB] (16 MBps) [2024-11-05T18:14:57.597Z] Copying: 757/1024 [MB] (16 MBps) [2024-11-05T18:14:58.534Z] Copying: 774/1024 [MB] (16 MBps) [2024-11-05T18:14:59.471Z] Copying: 792/1024 [MB] (18 MBps) [2024-11-05T18:15:00.411Z] Copying: 809/1024 [MB] (17 MBps) [2024-11-05T18:15:01.789Z] Copying: 826/1024 [MB] (16 MBps) [2024-11-05T18:15:02.358Z] Copying: 843/1024 [MB] (16 MBps) 
[2024-11-05T18:15:03.735Z] Copying: 859/1024 [MB] (16 MBps) [2024-11-05T18:15:04.672Z] Copying: 876/1024 [MB] (16 MBps) [2024-11-05T18:15:05.609Z] Copying: 892/1024 [MB] (16 MBps) [2024-11-05T18:15:06.546Z] Copying: 909/1024 [MB] (16 MBps) [2024-11-05T18:15:07.489Z] Copying: 926/1024 [MB] (16 MBps) [2024-11-05T18:15:08.426Z] Copying: 942/1024 [MB] (16 MBps) [2024-11-05T18:15:09.363Z] Copying: 959/1024 [MB] (16 MBps) [2024-11-05T18:15:10.741Z] Copying: 976/1024 [MB] (16 MBps) [2024-11-05T18:15:11.678Z] Copying: 992/1024 [MB] (16 MBps) [2024-11-05T18:15:12.616Z] Copying: 1008/1024 [MB] (16 MBps) [2024-11-05T18:15:13.554Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:25:44.231 00:25:44.231 18:15:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:44.231 18:15:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:44.491 18:15:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:44.750 [2024-11-05 18:15:13.847464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.750 [2024-11-05 18:15:13.847516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:44.750 [2024-11-05 18:15:13.847533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:44.750 [2024-11-05 18:15:13.847546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.750 [2024-11-05 18:15:13.847569] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:44.750 [2024-11-05 18:15:13.851798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.750 [2024-11-05 18:15:13.851966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:44.750 [2024-11-05 18:15:13.851994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.209 ms 00:25:44.751 [2024-11-05 18:15:13.852013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:13.854170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:13.854215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:44.751 [2024-11-05 18:15:13.854232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:25:44.751 [2024-11-05 18:15:13.854242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:13.872342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:13.872385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:44.751 [2024-11-05 18:15:13.872400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.104 ms 00:25:44.751 [2024-11-05 18:15:13.872423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:13.877272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:13.877304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:44.751 [2024-11-05 18:15:13.877319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.816 ms 00:25:44.751 [2024-11-05 18:15:13.877329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:13.911467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:13.911505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:44.751 [2024-11-05 18:15:13.911521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.065 ms 00:25:44.751 [2024-11-05 18:15:13.911531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:13.931794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:13.931830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:44.751 [2024-11-05 18:15:13.931845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.250 ms 00:25:44.751 [2024-11-05 18:15:13.931858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:13.931996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:13.932009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:44.751 [2024-11-05 18:15:13.932022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:25:44.751 [2024-11-05 18:15:13.932031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:13.968168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:13.968204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:44.751 [2024-11-05 18:15:13.968221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.173 ms 00:25:44.751 [2024-11-05 18:15:13.968247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:14.003173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:14.003328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:44.751 [2024-11-05 18:15:14.003368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.938 ms 00:25:44.751 [2024-11-05 18:15:14.003379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:14.036949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:14.036993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:44.751 [2024-11-05 18:15:14.037009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.566 ms 00:25:44.751 [2024-11-05 18:15:14.037035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:14.070638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:14.070676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:44.751 [2024-11-05 18:15:14.070691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.565 ms 00:25:44.751 [2024-11-05 18:15:14.070700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.751 [2024-11-05 18:15:14.070738] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:44.751 [2024-11-05 18:15:14.070752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:44.751 [2024-11-05 18:15:14.070767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:44.751 [2024-11-05 18:15:14.070777] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free [... Bands 4-100 identical (0 / 261120 wr_cnt: 0 state: free); 97 repeated per-band dump entries condensed ...] 00:25:44.751 [2024-11-05 18:15:14.071952] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:44.751 [2024-11-05 18:15:14.071964] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ee1f9e-64f6-4c91-a0d6-bdf6d43b3bc5
00:25:44.751 [2024-11-05 18:15:14.071975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:44.751 [2024-11-05 18:15:14.071989] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:44.751 [2024-11-05 18:15:14.071998] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:44.751 [2024-11-05 18:15:14.072029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:44.751 [2024-11-05 18:15:14.072039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:44.751 [2024-11-05 18:15:14.072051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:44.751 [2024-11-05 18:15:14.072061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:44.751 [2024-11-05 18:15:14.072072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:44.751 [2024-11-05 18:15:14.072081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:44.751 [2024-11-05 18:15:14.072093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.751 [2024-11-05 18:15:14.072103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:44.751 [2024-11-05 18:15:14.072116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:25:44.751 [2024-11-05 18:15:14.072126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.010 [2024-11-05 18:15:14.091137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.010 [2024-11-05 18:15:14.091168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:45.010 [2024-11-05 18:15:14.091185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.987 ms 00:25:45.010 [2024-11-05 18:15:14.091194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.010 [2024-11-05 18:15:14.091704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.010 [2024-11-05 18:15:14.091716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:45.010 [2024-11-05 18:15:14.091728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.483 ms 00:25:45.010 [2024-11-05 18:15:14.091738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.010 [2024-11-05 18:15:14.151712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.010 [2024-11-05 18:15:14.151872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.010 [2024-11-05 18:15:14.151912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.010 [2024-11-05 18:15:14.151923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.010 [2024-11-05 18:15:14.152001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.010 [2024-11-05 18:15:14.152014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.010 [2024-11-05 18:15:14.152027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.010 [2024-11-05 18:15:14.152038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.010 [2024-11-05 18:15:14.152117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.010 [2024-11-05 18:15:14.152130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.010 [2024-11-05 18:15:14.152153] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.010 [2024-11-05 18:15:14.152162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.010 [2024-11-05 18:15:14.152186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.010 [2024-11-05 18:15:14.152197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.010 [2024-11-05 18:15:14.152210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.010 [2024-11-05 18:15:14.152220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.010 [2024-11-05 18:15:14.268970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.010 [2024-11-05 18:15:14.269020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.010 [2024-11-05 18:15:14.269037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.010 [2024-11-05 18:15:14.269046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.362642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.272 [2024-11-05 18:15:14.362846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.272 [2024-11-05 18:15:14.362888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.272 [2024-11-05 18:15:14.362899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.363029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.272 [2024-11-05 18:15:14.363043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.272 [2024-11-05 18:15:14.363057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.272 [2024-11-05 18:15:14.363069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.363126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.272 [2024-11-05 18:15:14.363138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.272 [2024-11-05 18:15:14.363152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.272 [2024-11-05 18:15:14.363162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.363278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.272 [2024-11-05 18:15:14.363291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.272 [2024-11-05 18:15:14.363304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.272 [2024-11-05 18:15:14.363315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.363361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.272 [2024-11-05 18:15:14.363374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.272 [2024-11-05 18:15:14.363387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.272 [2024-11-05 18:15:14.363397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.363463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.272 [2024-11-05 18:15:14.363476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:25:45.272 [2024-11-05 18:15:14.363489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.272 [2024-11-05 18:15:14.363499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.363550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.272 [2024-11-05 18:15:14.363562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.272 [2024-11-05 18:15:14.363575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.272 [2024-11-05 18:15:14.363585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.272 [2024-11-05 18:15:14.363717] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 517.054 ms, result 0 00:25:45.272 true 00:25:45.272 18:15:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78211 00:25:45.272 18:15:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78211 00:25:45.272 18:15:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:45.272 [2024-11-05 18:15:14.488286] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:25:45.272 [2024-11-05 18:15:14.488618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79101 ] 00:25:45.532 [2024-11-05 18:15:14.667488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.532 [2024-11-05 18:15:14.776795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.910  [2024-11-05T18:15:17.170Z] Copying: 209/1024 [MB] (209 MBps) [2024-11-05T18:15:18.107Z] Copying: 427/1024 [MB] (218 MBps) [2024-11-05T18:15:19.486Z] Copying: 647/1024 [MB] (220 MBps) [2024-11-05T18:15:20.055Z] Copying: 866/1024 [MB] (218 MBps) [2024-11-05T18:15:20.991Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:25:51.668 00:25:51.668 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78211 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:25:51.668 18:15:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:51.668 [2024-11-05 18:15:20.989033] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
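[Annotation] Line 87 of dirty_shutdown.sh, traced above, is the "dirty" part of the test: the spdk_tgt process that owns ftl0 (PID 78211) is killed with SIGKILL, so none of the orderly persist steps of the earlier 'FTL shutdown' sequence run for this instance, and a standalone spdk_dd then drives the same FTL bdev from the JSON config captured while the target was alive. A condensed sketch of that sequence, using only commands and flags that appear in this log (paths shortened; $spdk_tgt_pid stands for the traced PID):

    # Simulate a crash: no FTL shutdown path gets to run
    kill -9 "$spdk_tgt_pid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"
    # Stage 1 GiB of fresh random data
    spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
    # Write it straight to the FTL bdev: --ob targets a bdev (vs --of for a
    # plain file), and --json lets spdk_dd bring the bdev stack up on its own
    spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
            --json=test/ftl/config/ftl.json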
00:25:51.668 [2024-11-05 18:15:20.989461] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79170 ] 00:25:51.927 [2024-11-05 18:15:21.167813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.186 [2024-11-05 18:15:21.278375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.445 [2024-11-05 18:15:21.618972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:52.445 [2024-11-05 18:15:21.619040] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:52.445 [2024-11-05 18:15:21.684549] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:52.445 [2024-11-05 18:15:21.684854] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:52.445 [2024-11-05 18:15:21.685073] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:52.704 [2024-11-05 18:15:22.006185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.704 [2024-11-05 18:15:22.006231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:52.704 [2024-11-05 18:15:22.006245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:52.704 [2024-11-05 18:15:22.006255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.704 [2024-11-05 18:15:22.006303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.704 [2024-11-05 18:15:22.006314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.704 [2024-11-05 18:15:22.006324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:52.704 [2024-11-05 18:15:22.006333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.704 [2024-11-05 18:15:22.006353] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:52.704 [2024-11-05 18:15:22.007367] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:52.704 [2024-11-05 18:15:22.007396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.704 [2024-11-05 18:15:22.007416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.704 [2024-11-05 18:15:22.007427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:25:52.704 [2024-11-05 18:15:22.007437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.704 [2024-11-05 18:15:22.009007] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:52.704 [2024-11-05 18:15:22.026375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.704 [2024-11-05 18:15:22.026546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:52.704 [2024-11-05 18:15:22.026693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.396 ms 00:25:52.704 [2024-11-05 18:15:22.026731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.704 [2024-11-05 18:15:22.026807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.704 [2024-11-05 18:15:22.026847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:52.704 [2024-11-05 18:15:22.026879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:52.704 [2024-11-05 18:15:22.026965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.033698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.965 [2024-11-05 18:15:22.033826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.965 [2024-11-05 18:15:22.033964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.642 ms 00:25:52.965 [2024-11-05 18:15:22.034001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.034103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.965 [2024-11-05 18:15:22.034138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.965 [2024-11-05 18:15:22.034223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:52.965 [2024-11-05 18:15:22.034258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.034323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.965 [2024-11-05 18:15:22.034363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:52.965 [2024-11-05 18:15:22.034394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:52.965 [2024-11-05 18:15:22.034487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.034542] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:52.965 [2024-11-05 18:15:22.039179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.965 [2024-11-05 18:15:22.039210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.965 [2024-11-05 18:15:22.039222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.649 ms 00:25:52.965 [2024-11-05 18:15:22.039247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.039287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.965 [2024-11-05 18:15:22.039299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:52.965 [2024-11-05 18:15:22.039310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:52.965 [2024-11-05 18:15:22.039319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.039369] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:52.965 [2024-11-05 18:15:22.039394] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:52.965 [2024-11-05 18:15:22.039443] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:52.965 [2024-11-05 18:15:22.039461] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:52.965 [2024-11-05 18:15:22.039548] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:52.965 [2024-11-05 18:15:22.039562] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:52.965 
[2024-11-05 18:15:22.039575] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:52.965 [2024-11-05 18:15:22.039588] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:52.965 [2024-11-05 18:15:22.039603] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:52.965 [2024-11-05 18:15:22.039614] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:52.965 [2024-11-05 18:15:22.039624] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:52.965 [2024-11-05 18:15:22.039634] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:52.965 [2024-11-05 18:15:22.039643] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:52.965 [2024-11-05 18:15:22.039653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.965 [2024-11-05 18:15:22.039663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:52.965 [2024-11-05 18:15:22.039673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:25:52.965 [2024-11-05 18:15:22.039682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.039751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.965 [2024-11-05 18:15:22.039765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:52.965 [2024-11-05 18:15:22.039776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:52.965 [2024-11-05 18:15:22.039785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.965 [2024-11-05 18:15:22.039875] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:52.965 [2024-11-05 18:15:22.039889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:52.965 [2024-11-05 18:15:22.039900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.965 [2024-11-05 18:15:22.039910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.965 [2024-11-05 18:15:22.039921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:52.965 [2024-11-05 18:15:22.039930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:52.965 [2024-11-05 18:15:22.039939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:52.965 [2024-11-05 18:15:22.039950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:52.965 [2024-11-05 18:15:22.039959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:52.965 [2024-11-05 18:15:22.039969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.965 [2024-11-05 18:15:22.039979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:52.965 [2024-11-05 18:15:22.039997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:52.966 [2024-11-05 18:15:22.040006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:52.966 [2024-11-05 18:15:22.040015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:52.966 [2024-11-05 18:15:22.040025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:52.966 [2024-11-05 18:15:22.040033] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:52.966 [2024-11-05 18:15:22.040052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:52.966 [2024-11-05 18:15:22.040060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:52.966 [2024-11-05 18:15:22.040078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.966 [2024-11-05 18:15:22.040096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:52.966 [2024-11-05 18:15:22.040105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.966 [2024-11-05 18:15:22.040122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:52.966 [2024-11-05 18:15:22.040131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.966 [2024-11-05 18:15:22.040148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:52.966 [2024-11-05 18:15:22.040157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:52.966 [2024-11-05 18:15:22.040174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:52.966 [2024-11-05 18:15:22.040183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.966 [2024-11-05 18:15:22.040200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:52.966 [2024-11-05 18:15:22.040208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:52.966 [2024-11-05 18:15:22.040217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:52.966 [2024-11-05 18:15:22.040226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:52.966 [2024-11-05 18:15:22.040234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:52.966 [2024-11-05 18:15:22.040243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:52.966 [2024-11-05 18:15:22.040261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:52.966 [2024-11-05 18:15:22.040272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.966 [2024-11-05 18:15:22.040280] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:52.966 [2024-11-05 18:15:22.040290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:52.966 [2024-11-05 18:15:22.040300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:52.966 [2024-11-05 18:15:22.040313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:52.966 [2024-11-05 
18:15:22.040323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:52.966 [2024-11-05 18:15:22.040332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:52.966 [2024-11-05 18:15:22.040340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:52.966 [2024-11-05 18:15:22.040349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:52.966 [2024-11-05 18:15:22.040358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:52.966 [2024-11-05 18:15:22.040368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:52.966 [2024-11-05 18:15:22.040378] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:52.966 [2024-11-05 18:15:22.040389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.966 [2024-11-05 18:15:22.040400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:52.966 [2024-11-05 18:15:22.040623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:52.966 [2024-11-05 18:15:22.040685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:52.966 [2024-11-05 18:15:22.040733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:52.966 [2024-11-05 18:15:22.040780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:52.966 [2024-11-05 18:15:22.040827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:52.966 [2024-11-05 18:15:22.040925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:52.966 [2024-11-05 18:15:22.040975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:52.966 [2024-11-05 18:15:22.041022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:52.966 [2024-11-05 18:15:22.041069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:52.966 [2024-11-05 18:15:22.041162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:52.966 [2024-11-05 18:15:22.041214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:52.966 [2024-11-05 18:15:22.041261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:52.966 [2024-11-05 18:15:22.041308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:52.966 [2024-11-05 18:15:22.041402] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:52.966 [2024-11-05 18:15:22.041467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:52.966 [2024-11-05 18:15:22.041520] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:52.966 [2024-11-05 18:15:22.041586] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:52.966 [2024-11-05 18:15:22.041749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:52.966 [2024-11-05 18:15:22.041834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:52.966 [2024-11-05 18:15:22.041886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.041916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:52.966 [2024-11-05 18:15:22.041948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.066 ms 00:25:52.966 [2024-11-05 18:15:22.042025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.080062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.080217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.966 [2024-11-05 18:15:22.080255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.015 ms 00:25:52.966 [2024-11-05 18:15:22.080266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.080353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.080374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:52.966 [2024-11-05 18:15:22.080385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:52.966 [2024-11-05 18:15:22.080394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.135532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.135718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.966 [2024-11-05 18:15:22.135825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.154 ms 00:25:52.966 [2024-11-05 18:15:22.135870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.135928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.136006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.966 [2024-11-05 18:15:22.136042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:52.966 [2024-11-05 18:15:22.136072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.136643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.136753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.966 [2024-11-05 18:15:22.136825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:25:52.966 [2024-11-05 18:15:22.136860] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.137011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.137053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.966 [2024-11-05 18:15:22.137144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:52.966 [2024-11-05 18:15:22.137180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.156510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.156640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.966 [2024-11-05 18:15:22.156717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.315 ms 00:25:52.966 [2024-11-05 18:15:22.156753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.966 [2024-11-05 18:15:22.174817] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:52.966 [2024-11-05 18:15:22.175050] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:52.966 [2024-11-05 18:15:22.175089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.966 [2024-11-05 18:15:22.175113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:52.966 [2024-11-05 18:15:22.175139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.243 ms 00:25:52.966 [2024-11-05 18:15:22.175162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.967 [2024-11-05 18:15:22.204787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.967 [2024-11-05 18:15:22.204937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:52.967 [2024-11-05 18:15:22.204971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.604 ms 00:25:52.967 [2024-11-05 18:15:22.204983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.967 [2024-11-05 18:15:22.223045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.967 [2024-11-05 18:15:22.223094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:52.967 [2024-11-05 18:15:22.223108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.050 ms 00:25:52.967 [2024-11-05 18:15:22.223119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.967 [2024-11-05 18:15:22.240854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.967 [2024-11-05 18:15:22.240893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:52.967 [2024-11-05 18:15:22.240907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.726 ms 00:25:52.967 [2024-11-05 18:15:22.240917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.967 [2024-11-05 18:15:22.241783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:52.967 [2024-11-05 18:15:22.241804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:52.967 [2024-11-05 18:15:22.241815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:25:52.967 [2024-11-05 18:15:22.241825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
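[Annotation] Because the previous owner of ftl0 was killed rather than shut down, this startup takes the recovery path: "SHM: clean 0, shm_clean 0" above, followed by the Restore steps that rebuild NV cache, valid map, band, trim, and P2L state from media instead of trusting a clean-state marker. Each management step is logged as an Action/name/duration/status quadruple, so per-step timings can be pulled out of a saved copy of this console output with standard tools. A sketch, assuming ftl.log holds the log with one NOTICE entry per line as the console originally emitted it:

    # Pair each step name with the duration entry that follows it
    grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' ftl.log | paste - -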
00:25:53.226 [2024-11-05 18:15:22.328321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 18:15:22.328390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:53.226 [2024-11-05 18:15:22.328425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.613 ms 00:25:53.226 [2024-11-05 18:15:22.328437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.339745] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:53.226 [2024-11-05 18:15:22.342972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 18:15:22.343008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:53.226 [2024-11-05 18:15:22.343023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.496 ms 00:25:53.226 [2024-11-05 18:15:22.343033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.343136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 18:15:22.343150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:53.226 [2024-11-05 18:15:22.343163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:53.226 [2024-11-05 18:15:22.343173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.343286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 18:15:22.343304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:53.226 [2024-11-05 18:15:22.343316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:53.226 [2024-11-05 18:15:22.343326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.343353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 18:15:22.343368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:53.226 [2024-11-05 18:15:22.343379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:53.226 [2024-11-05 18:15:22.343389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.343437] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:53.226 [2024-11-05 18:15:22.343451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 18:15:22.343461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:53.226 [2024-11-05 18:15:22.343471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:53.226 [2024-11-05 18:15:22.343481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.379010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 18:15:22.379065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:53.226 [2024-11-05 18:15:22.379085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.561 ms 00:25:53.226 [2024-11-05 18:15:22.379101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.379191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:53.226 [2024-11-05 
18:15:22.379209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:53.226 [2024-11-05 18:15:22.379225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:53.226 [2024-11-05 18:15:22.379240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:53.226 [2024-11-05 18:15:22.380558] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.440 ms, result 0 00:25:54.164  [2024-11-05T18:15:24.426Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-05T18:15:25.805Z] Copying: 48/1024 [MB] (23 MBps) [2024-11-05T18:15:26.748Z] Copying: 72/1024 [MB] (24 MBps) [2024-11-05T18:15:27.688Z] Copying: 95/1024 [MB] (23 MBps) [2024-11-05T18:15:28.644Z] Copying: 118/1024 [MB] (23 MBps) [2024-11-05T18:15:29.583Z] Copying: 142/1024 [MB] (23 MBps) [2024-11-05T18:15:30.521Z] Copying: 165/1024 [MB] (23 MBps) [2024-11-05T18:15:31.460Z] Copying: 189/1024 [MB] (23 MBps) [2024-11-05T18:15:32.400Z] Copying: 213/1024 [MB] (24 MBps) [2024-11-05T18:15:33.781Z] Copying: 237/1024 [MB] (24 MBps) [2024-11-05T18:15:34.720Z] Copying: 261/1024 [MB] (23 MBps) [2024-11-05T18:15:35.663Z] Copying: 284/1024 [MB] (23 MBps) [2024-11-05T18:15:36.604Z] Copying: 308/1024 [MB] (23 MBps) [2024-11-05T18:15:37.543Z] Copying: 331/1024 [MB] (22 MBps) [2024-11-05T18:15:38.482Z] Copying: 354/1024 [MB] (23 MBps) [2024-11-05T18:15:39.421Z] Copying: 377/1024 [MB] (23 MBps) [2024-11-05T18:15:40.801Z] Copying: 401/1024 [MB] (23 MBps) [2024-11-05T18:15:41.371Z] Copying: 424/1024 [MB] (22 MBps) [2024-11-05T18:15:42.751Z] Copying: 446/1024 [MB] (22 MBps) [2024-11-05T18:15:43.710Z] Copying: 468/1024 [MB] (22 MBps) [2024-11-05T18:15:44.650Z] Copying: 491/1024 [MB] (22 MBps) [2024-11-05T18:15:45.612Z] Copying: 513/1024 [MB] (22 MBps) [2024-11-05T18:15:46.551Z] Copying: 537/1024 [MB] (23 MBps) [2024-11-05T18:15:47.490Z] Copying: 560/1024 [MB] (23 MBps) [2024-11-05T18:15:48.428Z] Copying: 583/1024 [MB] (22 MBps) [2024-11-05T18:15:49.367Z] Copying: 606/1024 [MB] (23 MBps) [2024-11-05T18:15:50.746Z] Copying: 630/1024 [MB] (23 MBps) [2024-11-05T18:15:51.684Z] Copying: 653/1024 [MB] (23 MBps) [2024-11-05T18:15:52.622Z] Copying: 677/1024 [MB] (23 MBps) [2024-11-05T18:15:53.560Z] Copying: 701/1024 [MB] (24 MBps) [2024-11-05T18:15:54.498Z] Copying: 725/1024 [MB] (23 MBps) [2024-11-05T18:15:55.435Z] Copying: 748/1024 [MB] (23 MBps) [2024-11-05T18:15:56.374Z] Copying: 772/1024 [MB] (23 MBps) [2024-11-05T18:15:57.768Z] Copying: 797/1024 [MB] (24 MBps) [2024-11-05T18:15:58.337Z] Copying: 821/1024 [MB] (24 MBps) [2024-11-05T18:15:59.725Z] Copying: 845/1024 [MB] (24 MBps) [2024-11-05T18:16:00.666Z] Copying: 869/1024 [MB] (23 MBps) [2024-11-05T18:16:01.603Z] Copying: 893/1024 [MB] (24 MBps) [2024-11-05T18:16:02.540Z] Copying: 917/1024 [MB] (24 MBps) [2024-11-05T18:16:03.478Z] Copying: 941/1024 [MB] (23 MBps) [2024-11-05T18:16:04.416Z] Copying: 964/1024 [MB] (23 MBps) [2024-11-05T18:16:05.354Z] Copying: 988/1024 [MB] (23 MBps) [2024-11-05T18:16:06.733Z] Copying: 1011/1024 [MB] (23 MBps) [2024-11-05T18:16:06.733Z] Copying: 1023/1024 [MB] (12 MBps) [2024-11-05T18:16:06.733Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-05 18:16:06.566046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.410 [2024-11-05 18:16:06.566127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:37.410 [2024-11-05 18:16:06.566159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 
ms 00:26:37.410 [2024-11-05 18:16:06.566171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.410 [2024-11-05 18:16:06.568650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:37.410 [2024-11-05 18:16:06.573683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.410 [2024-11-05 18:16:06.573724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:37.410 [2024-11-05 18:16:06.573754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.993 ms 00:26:37.410 [2024-11-05 18:16:06.573765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.410 [2024-11-05 18:16:06.584221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.410 [2024-11-05 18:16:06.584260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:37.410 [2024-11-05 18:16:06.584272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.802 ms 00:26:37.410 [2024-11-05 18:16:06.584298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.410 [2024-11-05 18:16:06.607405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.410 [2024-11-05 18:16:06.607457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:37.410 [2024-11-05 18:16:06.607483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.127 ms 00:26:37.410 [2024-11-05 18:16:06.607493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.411 [2024-11-05 18:16:06.612506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.411 [2024-11-05 18:16:06.612546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:37.411 [2024-11-05 18:16:06.612559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.985 ms 00:26:37.411 [2024-11-05 18:16:06.612569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.411 [2024-11-05 18:16:06.649066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.411 [2024-11-05 18:16:06.649104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:37.411 [2024-11-05 18:16:06.649118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.515 ms 00:26:37.411 [2024-11-05 18:16:06.649127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.411 [2024-11-05 18:16:06.669585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.411 [2024-11-05 18:16:06.669760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:37.411 [2024-11-05 18:16:06.669782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.455 ms 00:26:37.411 [2024-11-05 18:16:06.669793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.671 [2024-11-05 18:16:06.788336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.671 [2024-11-05 18:16:06.788388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:37.671 [2024-11-05 18:16:06.788402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.677 ms 00:26:37.671 [2024-11-05 18:16:06.788433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.671 [2024-11-05 18:16:06.822882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.671 [2024-11-05 
18:16:06.823020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:37.671 [2024-11-05 18:16:06.823040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.488 ms 00:26:37.671 [2024-11-05 18:16:06.823066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.671 [2024-11-05 18:16:06.857075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.671 [2024-11-05 18:16:06.857109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:37.671 [2024-11-05 18:16:06.857122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.012 ms 00:26:37.671 [2024-11-05 18:16:06.857131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.671 [2024-11-05 18:16:06.890870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.671 [2024-11-05 18:16:06.890904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:37.671 [2024-11-05 18:16:06.890916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.759 ms 00:26:37.671 [2024-11-05 18:16:06.890942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.671 [2024-11-05 18:16:06.924350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.671 [2024-11-05 18:16:06.924386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:37.671 [2024-11-05 18:16:06.924399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.378 ms 00:26:37.671 [2024-11-05 18:16:06.924434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.672 [2024-11-05 18:16:06.924469] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:37.672 [2024-11-05 18:16:06.924484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103168 / 261120 wr_cnt: 1 state: open 00:26:37.672 [2024-11-05 18:16:06.924497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924613] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 
[2024-11-05 18:16:06.924873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.924991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:26:37.672 [2024-11-05 18:16:06.925147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:26:37.672 [2024-11-05 18:16:06.925410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:37.673 [2024-11-05 18:16:06.925568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:37.673 [2024-11-05 18:16:06.925579] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ee1f9e-64f6-4c91-a0d6-bdf6d43b3bc5 00:26:37.673 [2024-11-05 18:16:06.925590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103168 00:26:37.673 [2024-11-05 18:16:06.925605] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104128 00:26:37.673 [2024-11-05 18:16:06.925624] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103168 00:26:37.673 [2024-11-05 18:16:06.925635] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:26:37.673 [2024-11-05 18:16:06.925645] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:37.673 [2024-11-05 18:16:06.925655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:37.673 [2024-11-05 18:16:06.925665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:37.673 [2024-11-05 18:16:06.925674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:37.673 [2024-11-05 18:16:06.925683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:37.673 [2024-11-05 18:16:06.925693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.673 [2024-11-05 18:16:06.925703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:37.673 [2024-11-05 18:16:06.925713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:26:37.673 [2024-11-05 18:16:06.925732] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.673 [2024-11-05 18:16:06.945298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.673 [2024-11-05 18:16:06.945331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:37.673 [2024-11-05 18:16:06.945344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.564 ms 00:26:37.673 [2024-11-05 18:16:06.945354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.673 [2024-11-05 18:16:06.945959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.673 [2024-11-05 18:16:06.945985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:37.673 [2024-11-05 18:16:06.945996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:26:37.673 [2024-11-05 18:16:06.946006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:06.996234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:06.996271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.933 [2024-11-05 18:16:06.996283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:06.996293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:06.996342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:06.996352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.933 [2024-11-05 18:16:06.996362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:06.996371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:06.996464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:06.996478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.933 [2024-11-05 18:16:06.996488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:06.996497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:06.996513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:06.996523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.933 [2024-11-05 18:16:06.996533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:06.996542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.114417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.114576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.933 [2024-11-05 18:16:07.114613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.114624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.207279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.207322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.933 [2024-11-05 18:16:07.207335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.207345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.207449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.207461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.933 [2024-11-05 18:16:07.207472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.207481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.207516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.207527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.933 [2024-11-05 18:16:07.207537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.207547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.207671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.207689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.933 [2024-11-05 18:16:07.207701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.207711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.207745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.207757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:37.933 [2024-11-05 18:16:07.207767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.207777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.207813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.207828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.933 [2024-11-05 18:16:07.207839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.207848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.207887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:37.933 [2024-11-05 18:16:07.207898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.933 [2024-11-05 18:16:07.207908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:37.933 [2024-11-05 18:16:07.207918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.933 [2024-11-05 18:16:07.208050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 645.488 ms, result 0 00:26:39.313 00:26:39.313 00:26:39.313 18:16:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:41.219 18:16:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:41.219 [2024-11-05 18:16:10.245882] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 
initialization... 00:26:41.219 [2024-11-05 18:16:10.246352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79669 ] 00:26:41.219 [2024-11-05 18:16:10.426843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.219 [2024-11-05 18:16:10.539083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.788 [2024-11-05 18:16:10.882251] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:41.788 [2024-11-05 18:16:10.882316] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:41.788 [2024-11-05 18:16:11.042464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.042511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:41.788 [2024-11-05 18:16:11.042533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:41.788 [2024-11-05 18:16:11.042543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.788 [2024-11-05 18:16:11.042588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.042601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:41.788 [2024-11-05 18:16:11.042615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:41.788 [2024-11-05 18:16:11.042624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.788 [2024-11-05 18:16:11.042645] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:41.788 [2024-11-05 18:16:11.043681] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:41.788 [2024-11-05 18:16:11.043711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.043722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:41.788 [2024-11-05 18:16:11.043733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:26:41.788 [2024-11-05 18:16:11.043743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.788 [2024-11-05 18:16:11.045185] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:41.788 [2024-11-05 18:16:11.062739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.062774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:41.788 [2024-11-05 18:16:11.062788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.582 ms 00:26:41.788 [2024-11-05 18:16:11.062815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.788 [2024-11-05 18:16:11.062878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.062890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:41.788 [2024-11-05 18:16:11.062901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:41.788 [2024-11-05 18:16:11.062911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.788 [2024-11-05 18:16:11.069800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.069936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:41.788 [2024-11-05 18:16:11.069956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.831 ms 00:26:41.788 [2024-11-05 18:16:11.069982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.788 [2024-11-05 18:16:11.070069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.070081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:41.788 [2024-11-05 18:16:11.070092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:41.788 [2024-11-05 18:16:11.070103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.788 [2024-11-05 18:16:11.070144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.788 [2024-11-05 18:16:11.070156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:41.789 [2024-11-05 18:16:11.070166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:41.789 [2024-11-05 18:16:11.070176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.789 [2024-11-05 18:16:11.070199] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:41.789 [2024-11-05 18:16:11.074833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.789 [2024-11-05 18:16:11.074864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:41.789 [2024-11-05 18:16:11.074877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms 00:26:41.789 [2024-11-05 18:16:11.074890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.789 [2024-11-05 18:16:11.074920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.789 [2024-11-05 18:16:11.074930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:41.789 [2024-11-05 18:16:11.074941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:41.789 [2024-11-05 18:16:11.074950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.789 [2024-11-05 18:16:11.075003] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:41.789 [2024-11-05 18:16:11.075026] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:41.789 [2024-11-05 18:16:11.075061] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:41.789 [2024-11-05 18:16:11.075088] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:41.789 [2024-11-05 18:16:11.075186] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:41.789 [2024-11-05 18:16:11.075199] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:41.789 [2024-11-05 18:16:11.075211] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:41.789 [2024-11-05 18:16:11.075223] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075234] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075244] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:41.789 [2024-11-05 18:16:11.075253] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:41.789 [2024-11-05 18:16:11.075263] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:41.789 [2024-11-05 18:16:11.075273] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:41.789 [2024-11-05 18:16:11.075286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.789 [2024-11-05 18:16:11.075295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:41.789 [2024-11-05 18:16:11.075305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:26:41.789 [2024-11-05 18:16:11.075314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.789 [2024-11-05 18:16:11.075380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.789 [2024-11-05 18:16:11.075389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:41.789 [2024-11-05 18:16:11.075399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:41.789 [2024-11-05 18:16:11.075408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:41.789 [2024-11-05 18:16:11.075512] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:41.789 [2024-11-05 18:16:11.075529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:41.789 [2024-11-05 18:16:11.075539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:41.789 [2024-11-05 18:16:11.075568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:41.789 [2024-11-05 18:16:11.075612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.789 [2024-11-05 18:16:11.075630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:41.789 [2024-11-05 18:16:11.075641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:41.789 [2024-11-05 18:16:11.075651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:41.789 [2024-11-05 18:16:11.075660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:41.789 [2024-11-05 18:16:11.075669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:41.789 [2024-11-05 18:16:11.075686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:41.789 [2024-11-05 18:16:11.075706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075721] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:41.789 [2024-11-05 18:16:11.075740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:41.789 [2024-11-05 18:16:11.075766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:41.789 [2024-11-05 18:16:11.075793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:41.789 [2024-11-05 18:16:11.075820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:41.789 [2024-11-05 18:16:11.075846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.789 [2024-11-05 18:16:11.075864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:41.789 [2024-11-05 18:16:11.075873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:41.789 [2024-11-05 18:16:11.075882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:41.789 [2024-11-05 18:16:11.075891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:41.789 [2024-11-05 18:16:11.075900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:41.789 [2024-11-05 18:16:11.075908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:41.789 [2024-11-05 18:16:11.075926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:41.789 [2024-11-05 18:16:11.075934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.789 [2024-11-05 18:16:11.075944] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:41.789 [2024-11-05 18:16:11.075954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:41.789 [2024-11-05 18:16:11.075964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:41.789 [2024-11-05 18:16:11.075990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:41.789 [2024-11-05 18:16:11.076000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:41.789 [2024-11-05 18:16:11.076010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:41.789 [2024-11-05 18:16:11.076019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:41.789 
[2024-11-05 18:16:11.076029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:41.789 [2024-11-05 18:16:11.076037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:41.789 [2024-11-05 18:16:11.076047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:41.789 [2024-11-05 18:16:11.076058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:41.789 [2024-11-05 18:16:11.076069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.789 [2024-11-05 18:16:11.076080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:41.789 [2024-11-05 18:16:11.076091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:41.789 [2024-11-05 18:16:11.076101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:41.789 [2024-11-05 18:16:11.076111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:41.789 [2024-11-05 18:16:11.076121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:41.789 [2024-11-05 18:16:11.076132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:41.789 [2024-11-05 18:16:11.076142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:41.789 [2024-11-05 18:16:11.076152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:41.789 [2024-11-05 18:16:11.076163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:41.790 [2024-11-05 18:16:11.076173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:41.790 [2024-11-05 18:16:11.076183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:41.790 [2024-11-05 18:16:11.076193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:41.790 [2024-11-05 18:16:11.076203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:41.790 [2024-11-05 18:16:11.076214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:41.790 [2024-11-05 18:16:11.076223] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:41.790 [2024-11-05 18:16:11.076238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:41.790 [2024-11-05 18:16:11.076249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:41.790 [2024-11-05 18:16:11.076259] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:41.790 [2024-11-05 18:16:11.076269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:41.790 [2024-11-05 18:16:11.076279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:41.790 [2024-11-05 18:16:11.076290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:41.790 [2024-11-05 18:16:11.076301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:41.790 [2024-11-05 18:16:11.076312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:26:41.790 [2024-11-05 18:16:11.076322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.049 [2024-11-05 18:16:11.112387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.049 [2024-11-05 18:16:11.112432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:42.049 [2024-11-05 18:16:11.112445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.079 ms 00:26:42.050 [2024-11-05 18:16:11.112455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.112529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.112553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:42.050 [2024-11-05 18:16:11.112563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:26:42.050 [2024-11-05 18:16:11.112573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.184588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.184626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:42.050 [2024-11-05 18:16:11.184640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.063 ms 00:26:42.050 [2024-11-05 18:16:11.184650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.184688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.184699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:42.050 [2024-11-05 18:16:11.184709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:42.050 [2024-11-05 18:16:11.184723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.185233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.185247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:42.050 [2024-11-05 18:16:11.185257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:26:42.050 [2024-11-05 18:16:11.185267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.185375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.185388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:42.050 [2024-11-05 18:16:11.185398] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:42.050 [2024-11-05 18:16:11.185413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.202939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.203097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:42.050 [2024-11-05 18:16:11.203140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.517 ms 00:26:42.050 [2024-11-05 18:16:11.203152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.221227] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:42.050 [2024-11-05 18:16:11.221264] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:42.050 [2024-11-05 18:16:11.221279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.221289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:42.050 [2024-11-05 18:16:11.221300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.058 ms 00:26:42.050 [2024-11-05 18:16:11.221309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.249033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.249177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:42.050 [2024-11-05 18:16:11.249199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.728 ms 00:26:42.050 [2024-11-05 18:16:11.249225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.266142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.266187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:42.050 [2024-11-05 18:16:11.266200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.899 ms 00:26:42.050 [2024-11-05 18:16:11.266208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.283068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.283101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:42.050 [2024-11-05 18:16:11.283113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.852 ms 00:26:42.050 [2024-11-05 18:16:11.283122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.283852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.283878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:42.050 [2024-11-05 18:16:11.283890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:26:42.050 [2024-11-05 18:16:11.283903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.050 [2024-11-05 18:16:11.364830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.050 [2024-11-05 18:16:11.364886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:42.050 [2024-11-05 18:16:11.364908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.020 ms 00:26:42.050 [2024-11-05 18:16:11.364919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.312 [2024-11-05 18:16:11.375098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:42.312 [2024-11-05 18:16:11.377532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.312 [2024-11-05 18:16:11.377560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:42.312 [2024-11-05 18:16:11.377573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.589 ms 00:26:42.312 [2024-11-05 18:16:11.377583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.312 [2024-11-05 18:16:11.377653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.312 [2024-11-05 18:16:11.377665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:42.312 [2024-11-05 18:16:11.377676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:42.312 [2024-11-05 18:16:11.377689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.312 [2024-11-05 18:16:11.379251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.312 [2024-11-05 18:16:11.379378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:42.312 [2024-11-05 18:16:11.379464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.515 ms 00:26:42.312 [2024-11-05 18:16:11.379501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.312 [2024-11-05 18:16:11.379552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.312 [2024-11-05 18:16:11.379686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:42.312 [2024-11-05 18:16:11.379762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:42.312 [2024-11-05 18:16:11.379791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.312 [2024-11-05 18:16:11.379849] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:42.312 [2024-11-05 18:16:11.379887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.312 [2024-11-05 18:16:11.379917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:42.312 [2024-11-05 18:16:11.379947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:42.312 [2024-11-05 18:16:11.379978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.312 [2024-11-05 18:16:11.413527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.312 [2024-11-05 18:16:11.413655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:42.312 [2024-11-05 18:16:11.413749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.499 ms 00:26:42.312 [2024-11-05 18:16:11.413773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:42.312 [2024-11-05 18:16:11.413842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:42.312 [2024-11-05 18:16:11.413855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:42.312 [2024-11-05 18:16:11.413866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:42.312 [2024-11-05 18:16:11.413876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
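(The restore steps above, covering the NV cache, valid map, band info, trim metadata, P2L checkpoints and L2P, complete the FTL startup from persisted metadata; the copy that follows reads the test payload back so its checksum can be compared with one recorded earlier. A minimal sketch of that round trip, illustrative rather than quoted from dirty_shutdown.sh, whose body is not part of this log, with the flags and paths mirroring the spdk_dd invocation logged above:

# Record a digest of the payload before the shutdown (the harness does
# the same for testfile2 at dirty_shutdown.sh@90 above).
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
# After restart, read the region back off the restored ftl0 bdev; the
# flags and paths mirror the spdk_dd invocation logged above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --count=262144 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
# Compare the read-back digest with the one recorded earlier; a mismatch
# would mean acknowledged writes were lost across the shutdown.
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile

As a consistency check on the counters, the WAF of 1.0093 reported in the first shutdown's statistics dump is simply total writes over user writes: 104128 / 103168 ≈ 1.0093.)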
00:26:42.312 [2024-11-05 18:16:11.414905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 372.636 ms, result 0 00:26:43.692 [2024-11-05T18:16:13.953Z] Copying: 1260/1048576 [kB] (1260 kBps) [... 31 intermediate 'Copying' progress notices elided ...] [2024-11-05T18:16:44.945Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-05 18:16:44.834403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.622 [2024-11-05 18:16:44.834690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:15.622 [2024-11-05 18:16:44.834862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:15.622 [2024-11-05 18:16:44.835048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.622 [2024-11-05 18:16:44.835144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:15.622 [2024-11-05 18:16:44.842168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.622 [2024-11-05 18:16:44.842378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:15.622 [2024-11-05 18:16:44.842804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.684 ms 00:27:15.622 [2024-11-05 18:16:44.842932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.622 [2024-11-05 18:16:44.843285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.622 [2024-11-05 18:16:44.843359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:15.622 [2024-11-05 18:16:44.843601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.265 ms 00:27:15.622 [2024-11-05 18:16:44.843665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.622 [2024-11-05 18:16:44.857326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.622 [2024-11-05 18:16:44.857529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:15.622 [2024-11-05 18:16:44.857639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.614 ms 00:27:15.622 [2024-11-05 18:16:44.857683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.622 [2024-11-05 18:16:44.863554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-05 18:16:44.863740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:15.623 [2024-11-05 18:16:44.863844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.700 ms 00:27:15.623 [2024-11-05 18:16:44.863893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-05 18:16:44.898872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-05 18:16:44.899035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:15.623 [2024-11-05 18:16:44.899194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.962 ms 00:27:15.623 [2024-11-05 18:16:44.899234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-05 18:16:44.919260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-05 18:16:44.919398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:15.623 [2024-11-05 18:16:44.919508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.998 ms 00:27:15.623 [2024-11-05 18:16:44.919546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.623 [2024-11-05 18:16:44.921643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.623 [2024-11-05 18:16:44.921779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:15.623 [2024-11-05 18:16:44.921807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:27:15.623 [2024-11-05 18:16:44.921818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.883 [2024-11-05 18:16:44.956523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.883 [2024-11-05 18:16:44.956560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:15.883 [2024-11-05 18:16:44.956572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.732 ms 00:27:15.883 [2024-11-05 18:16:44.956581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.883 [2024-11-05 18:16:44.990333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.883 [2024-11-05 18:16:44.990508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:15.883 [2024-11-05 18:16:44.990542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.771 ms 00:27:15.883 [2024-11-05 18:16:44.990552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.883 [2024-11-05 18:16:45.023658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.883 [2024-11-05 18:16:45.023696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:15.883 [2024-11-05 
18:16:45.023709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.122 ms 00:27:15.883 [2024-11-05 18:16:45.023718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.883 [2024-11-05 18:16:45.057315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.884 [2024-11-05 18:16:45.057350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:15.884 [2024-11-05 18:16:45.057363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.580 ms 00:27:15.884 [2024-11-05 18:16:45.057372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.884 [2024-11-05 18:16:45.057407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:15.884 [2024-11-05 18:16:45.057433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:15.884 [2024-11-05 18:16:45.057445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:15.884 [2024-11-05 18:16:45.057455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057624] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 
18:16:45.057885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.057999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 
00:27:15.884 [2024-11-05 18:16:45.058157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:15.884 [2024-11-05 18:16:45.058324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 
wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:15.885 [2024-11-05 18:16:45.058503] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:15.885 [2024-11-05 18:16:45.058513] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ee1f9e-64f6-4c91-a0d6-bdf6d43b3bc5 00:27:15.885 [2024-11-05 18:16:45.058524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:15.885 [2024-11-05 18:16:45.058534] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161472 00:27:15.885 [2024-11-05 18:16:45.058544] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 159488 00:27:15.885 [2024-11-05 18:16:45.058558] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:27:15.885 [2024-11-05 18:16:45.058568] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:15.885 [2024-11-05 18:16:45.058578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:15.885 [2024-11-05 18:16:45.058588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:15.885 [2024-11-05 18:16:45.058606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:15.885 [2024-11-05 18:16:45.058616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:15.885 [2024-11-05 18:16:45.058625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.885 [2024-11-05 18:16:45.058651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:15.885 [2024-11-05 18:16:45.058661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:27:15.885 [2024-11-05 18:16:45.058671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-05 18:16:45.078102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.885 [2024-11-05 18:16:45.078144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:15.885 [2024-11-05 18:16:45.078157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.426 ms 00:27:15.885 [2024-11-05 18:16:45.078167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-05 18:16:45.078743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.885 [2024-11-05 18:16:45.078756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:15.885 [2024-11-05 18:16:45.078766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:27:15.885 [2024-11-05 18:16:45.078776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-05 
18:16:45.129649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-05 18:16:45.129834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:15.885 [2024-11-05 18:16:45.129856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-05 18:16:45.129867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-05 18:16:45.129920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-05 18:16:45.129933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:15.885 [2024-11-05 18:16:45.129943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-05 18:16:45.129954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-05 18:16:45.130018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-05 18:16:45.130037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:15.885 [2024-11-05 18:16:45.130048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-05 18:16:45.130059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.885 [2024-11-05 18:16:45.130076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.885 [2024-11-05 18:16:45.130087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:15.885 [2024-11-05 18:16:45.130097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.885 [2024-11-05 18:16:45.130107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.249588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.249810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:16.145 [2024-11-05 18:16:45.249833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.249844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.343929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.344083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:16.145 [2024-11-05 18:16:45.344122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.344133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.344222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.344235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:16.145 [2024-11-05 18:16:45.344249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.344260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.344297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.344308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:16.145 [2024-11-05 18:16:45.344318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.344328] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.344461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.344476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:16.145 [2024-11-05 18:16:45.344487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.344501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.344542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.344555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:16.145 [2024-11-05 18:16:45.344567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.344577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.344614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.344626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:16.145 [2024-11-05 18:16:45.344636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.344650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.344691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:16.145 [2024-11-05 18:16:45.344703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:16.145 [2024-11-05 18:16:45.344713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:16.145 [2024-11-05 18:16:45.344724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:16.145 [2024-11-05 18:16:45.344838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 511.303 ms, result 0 00:27:17.083 00:27:17.083 00:27:17.083 18:16:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:18.990 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:18.990 18:16:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:18.990 [2024-11-05 18:16:48.039891] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
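
The shutdown statistics dumped above and the spdk_dd command just issued cross-check cleanly: WAF is total writes over user writes, the "total valid LBAs" figure is exactly the sum of the two non-free bands in the validity dump, and --count=262144 input blocks at the 4 KiB block size an FTL bdev typically exposes (an assumption here; the log does not print it) works out to the 1 GiB that the copy loop reported as 1024/1024 [MB], with --skip=262144 starting the read at the second gigabyte of ftl0. A quick arithmetic check, with every constant copied from the log above:

    # Cross-checks against figures printed in the ftl_debug.c dump and dd command.
    total_writes, user_writes = 161472, 159488
    print(f"WAF = {total_writes / user_writes:.4f}")        # -> 1.0124, as logged

    band1_valid, band2_valid = 261120, 1536                 # closed and open bands
    assert band1_valid + band2_valid == 262656              # "total valid LBAs"

    count, block_size = 262144, 4096                        # block size is assumed
    print(f"copy size = {count * block_size // 2**20} MiB") # -> 1024 MiB
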
00:27:18.990 [2024-11-05 18:16:48.040202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80056 ] 00:27:18.990 [2024-11-05 18:16:48.220858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.250 [2024-11-05 18:16:48.322063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.509 [2024-11-05 18:16:48.638447] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:19.509 [2024-11-05 18:16:48.638538] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:19.509 [2024-11-05 18:16:48.799913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.509 [2024-11-05 18:16:48.799964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:19.509 [2024-11-05 18:16:48.799983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:19.509 [2024-11-05 18:16:48.800009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.800054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.509 [2024-11-05 18:16:48.800067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:19.509 [2024-11-05 18:16:48.800081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:19.509 [2024-11-05 18:16:48.800090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.800110] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:19.509 [2024-11-05 18:16:48.801216] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:19.509 [2024-11-05 18:16:48.801436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.509 [2024-11-05 18:16:48.801520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:19.509 [2024-11-05 18:16:48.801559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:27:19.509 [2024-11-05 18:16:48.801707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.803300] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:19.509 [2024-11-05 18:16:48.822058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.509 [2024-11-05 18:16:48.822098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:19.509 [2024-11-05 18:16:48.822113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.789 ms 00:27:19.509 [2024-11-05 18:16:48.822123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.822185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.509 [2024-11-05 18:16:48.822198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:19.509 [2024-11-05 18:16:48.822209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:19.509 [2024-11-05 18:16:48.822219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.829102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:19.509 [2024-11-05 18:16:48.829128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:19.509 [2024-11-05 18:16:48.829139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.824 ms 00:27:19.509 [2024-11-05 18:16:48.829149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.829245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.509 [2024-11-05 18:16:48.829259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:19.509 [2024-11-05 18:16:48.829270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:19.509 [2024-11-05 18:16:48.829281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.829320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.509 [2024-11-05 18:16:48.829332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:19.509 [2024-11-05 18:16:48.829343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:19.509 [2024-11-05 18:16:48.829352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.509 [2024-11-05 18:16:48.829377] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:19.770 [2024-11-05 18:16:48.834110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.770 [2024-11-05 18:16:48.834144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:19.770 [2024-11-05 18:16:48.834156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.747 ms 00:27:19.770 [2024-11-05 18:16:48.834185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.770 [2024-11-05 18:16:48.834214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.770 [2024-11-05 18:16:48.834225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:19.770 [2024-11-05 18:16:48.834235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:19.770 [2024-11-05 18:16:48.834245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.770 [2024-11-05 18:16:48.834296] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:19.770 [2024-11-05 18:16:48.834319] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:19.770 [2024-11-05 18:16:48.834353] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:19.770 [2024-11-05 18:16:48.834374] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:19.770 [2024-11-05 18:16:48.834479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:19.770 [2024-11-05 18:16:48.834494] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:19.770 [2024-11-05 18:16:48.834507] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:19.770 [2024-11-05 18:16:48.834520] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:19.770 [2024-11-05 18:16:48.834532] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:19.770 [2024-11-05 18:16:48.834543] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:19.770 [2024-11-05 18:16:48.834554] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:19.770 [2024-11-05 18:16:48.834583] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:19.770 [2024-11-05 18:16:48.834593] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:19.770 [2024-11-05 18:16:48.834608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.770 [2024-11-05 18:16:48.834618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:19.770 [2024-11-05 18:16:48.834629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:27:19.770 [2024-11-05 18:16:48.834639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.770 [2024-11-05 18:16:48.834710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.770 [2024-11-05 18:16:48.834720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:19.770 [2024-11-05 18:16:48.834730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:19.770 [2024-11-05 18:16:48.834740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.770 [2024-11-05 18:16:48.834834] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:19.770 [2024-11-05 18:16:48.834857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:19.770 [2024-11-05 18:16:48.834869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:19.770 [2024-11-05 18:16:48.834879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:19.770 [2024-11-05 18:16:48.834890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:19.770 [2024-11-05 18:16:48.834899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:19.770 [2024-11-05 18:16:48.834908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:19.770 [2024-11-05 18:16:48.834918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:19.770 [2024-11-05 18:16:48.834927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:19.770 [2024-11-05 18:16:48.834936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:19.770 [2024-11-05 18:16:48.834946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:19.770 [2024-11-05 18:16:48.834955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:19.770 [2024-11-05 18:16:48.834965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:19.770 [2024-11-05 18:16:48.834975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:19.770 [2024-11-05 18:16:48.834984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:19.770 [2024-11-05 18:16:48.835002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:19.770 [2024-11-05 18:16:48.835011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:19.770 [2024-11-05 18:16:48.835020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:19.770 [2024-11-05 18:16:48.835030] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:19.770 [2024-11-05 18:16:48.835039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:19.770 [2024-11-05 18:16:48.835048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:19.770 [2024-11-05 18:16:48.835058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:19.770 [2024-11-05 18:16:48.835067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:19.770 [2024-11-05 18:16:48.835076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:19.770 [2024-11-05 18:16:48.835086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:19.770 [2024-11-05 18:16:48.835094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:19.770 [2024-11-05 18:16:48.835104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:19.771 [2024-11-05 18:16:48.835113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:19.771 [2024-11-05 18:16:48.835122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:19.771 [2024-11-05 18:16:48.835131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:19.771 [2024-11-05 18:16:48.835140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:19.771 [2024-11-05 18:16:48.835149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:19.771 [2024-11-05 18:16:48.835158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:19.771 [2024-11-05 18:16:48.835167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:19.771 [2024-11-05 18:16:48.835176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:19.771 [2024-11-05 18:16:48.835185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:19.771 [2024-11-05 18:16:48.835194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:19.771 [2024-11-05 18:16:48.835203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:19.771 [2024-11-05 18:16:48.835213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:19.771 [2024-11-05 18:16:48.835221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:19.771 [2024-11-05 18:16:48.835230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:19.771 [2024-11-05 18:16:48.835239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:19.771 [2024-11-05 18:16:48.835248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:19.771 [2024-11-05 18:16:48.835259] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:19.771 [2024-11-05 18:16:48.835269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:19.771 [2024-11-05 18:16:48.835279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:19.771 [2024-11-05 18:16:48.835288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:19.771 [2024-11-05 18:16:48.835298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:19.771 [2024-11-05 18:16:48.835307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:19.771 [2024-11-05 18:16:48.835317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:19.771 
[2024-11-05 18:16:48.835326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:19.771 [2024-11-05 18:16:48.835335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:19.771 [2024-11-05 18:16:48.835345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:19.771 [2024-11-05 18:16:48.835355] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:19.771 [2024-11-05 18:16:48.835367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:19.771 [2024-11-05 18:16:48.835379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:19.771 [2024-11-05 18:16:48.835389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:19.771 [2024-11-05 18:16:48.835399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:19.771 [2024-11-05 18:16:48.835420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:19.771 [2024-11-05 18:16:48.835431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:19.771 [2024-11-05 18:16:48.835442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:19.771 [2024-11-05 18:16:48.835453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:19.771 [2024-11-05 18:16:48.835463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:19.771 [2024-11-05 18:16:48.835473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:19.771 [2024-11-05 18:16:48.835484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:19.771 [2024-11-05 18:16:48.835494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:19.771 [2024-11-05 18:16:48.835504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:19.771 [2024-11-05 18:16:48.835515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:19.771 [2024-11-05 18:16:48.835525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:19.771 [2024-11-05 18:16:48.835535] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:19.771 [2024-11-05 18:16:48.835550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:19.771 [2024-11-05 18:16:48.835570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:19.771 [2024-11-05 18:16:48.835581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:19.771 [2024-11-05 18:16:48.835591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:19.771 [2024-11-05 18:16:48.835603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:19.771 [2024-11-05 18:16:48.835615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.835625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:19.771 [2024-11-05 18:16:48.835636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:27:19.771 [2024-11-05 18:16:48.835645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.875449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.875484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.771 [2024-11-05 18:16:48.875498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.821 ms 00:27:19.771 [2024-11-05 18:16:48.875523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.875603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.875614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:19.771 [2024-11-05 18:16:48.875625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:19.771 [2024-11-05 18:16:48.875634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.950324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.950364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.771 [2024-11-05 18:16:48.950380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.755 ms 00:27:19.771 [2024-11-05 18:16:48.950391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.950444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.950457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.771 [2024-11-05 18:16:48.950468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:19.771 [2024-11-05 18:16:48.950483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.950970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.950984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.771 [2024-11-05 18:16:48.950996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:27:19.771 [2024-11-05 18:16:48.951006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.951119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.951132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.771 [2024-11-05 18:16:48.951144] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:27:19.771 [2024-11-05 18:16:48.951160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.971621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.971657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.771 [2024-11-05 18:16:48.971674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.472 ms 00:27:19.771 [2024-11-05 18:16:48.971692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:48.989716] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:19.771 [2024-11-05 18:16:48.989773] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:19.771 [2024-11-05 18:16:48.989788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:48.989813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:19.771 [2024-11-05 18:16:48.989825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.029 ms 00:27:19.771 [2024-11-05 18:16:48.989835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:49.017899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:49.017943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:19.771 [2024-11-05 18:16:49.017957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.067 ms 00:27:19.771 [2024-11-05 18:16:49.017967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:49.034825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:49.034865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:19.771 [2024-11-05 18:16:49.034878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.831 ms 00:27:19.771 [2024-11-05 18:16:49.034903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:49.051976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:49.052011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:19.771 [2024-11-05 18:16:49.052023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.064 ms 00:27:19.771 [2024-11-05 18:16:49.052032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.771 [2024-11-05 18:16:49.052763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.771 [2024-11-05 18:16:49.052783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:19.771 [2024-11-05 18:16:49.052794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:27:19.771 [2024-11-05 18:16:49.052808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.133157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.133217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:20.031 [2024-11-05 18:16:49.133239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 80.458 ms 00:27:20.031 [2024-11-05 18:16:49.133249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.143665] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:20.031 [2024-11-05 18:16:49.145917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.146052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:20.031 [2024-11-05 18:16:49.146088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.646 ms 00:27:20.031 [2024-11-05 18:16:49.146099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.146177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.146191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:20.031 [2024-11-05 18:16:49.146202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:20.031 [2024-11-05 18:16:49.146215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.147087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.147118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:20.031 [2024-11-05 18:16:49.147130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:27:20.031 [2024-11-05 18:16:49.147140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.147163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.147175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:20.031 [2024-11-05 18:16:49.147185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:20.031 [2024-11-05 18:16:49.147196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.147241] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:20.031 [2024-11-05 18:16:49.147257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.147267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:20.031 [2024-11-05 18:16:49.147278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:20.031 [2024-11-05 18:16:49.147287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.181977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.182013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:20.031 [2024-11-05 18:16:49.182026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.727 ms 00:27:20.031 [2024-11-05 18:16:49.182041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.031 [2024-11-05 18:16:49.182111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.031 [2024-11-05 18:16:49.182123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:20.031 [2024-11-05 18:16:49.182133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:20.031 [2024-11-05 18:16:49.182142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
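
The "Restore ..." steps dominate this second startup, which is the point of the ftl_dirty_shutdown test: valid map, band info, trim, NV cache and P2L metadata all have to be rebuilt before I/O can resume. Summing their durations from the notices above against the 383.507 ms total that the finish_msg summary reports on the next log line gives the share of startup spent on recovery; a back-of-the-envelope tally with the durations copied from the log:

    # Durations (ms) of the metadata-restore steps logged in the startup above.
    restore_ms = {
        "Restore NV cache metadata": 18.029,
        "Restore valid map metadata": 28.067,
        "Restore band info metadata": 16.831,
        "Restore trim metadata": 17.064,
        "Restore P2L checkpoints": 80.458,
        "Restore L2P": 0.007,
    }
    total_startup_ms = 383.507          # finish_msg, 'FTL startup'
    restored = sum(restore_ms.values())
    print(f"{restored:.3f} ms restoring metadata "
          f"= {100 * restored / total_startup_ms:.1f}% of startup")
    # -> 160.456 ms restoring metadata = 41.8% of startup
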
00:27:20.031 [2024-11-05 18:16:49.183218] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.507 ms, result 0 00:27:21.411  [2024-11-05T18:16:51.672Z] Copying: 25/1024 [MB] (25 MBps) [... 39 intermediate progress updates, steady 23-25 MBps throughout, elided ...] [2024-11-05T18:17:30.677Z] Copying: 1017/1024 [MB] (25 MBps) [2024-11-05T18:17:30.677Z] Copying: 1024/1024 [MB] (average 24 MBps) [2024-11-05 18:17:30.659996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.354 [2024-11-05 18:17:30.660081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:01.354 [2024-11-05 18:17:30.660112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:01.354 [2024-11-05 18:17:30.660133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.354 [2024-11-05 18:17:30.660174] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:01.354 [2024-11-05 18:17:30.669041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.354 [2024-11-05 18:17:30.669219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-11-05 18:17:30.669338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.849 ms 00:28:01.354 [2024-11-05 18:17:30.669361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.354 [2024-11-05 18:17:30.669632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.354 [2024-11-05 18:17:30.669652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:01.354 [2024-11-05 18:17:30.669666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:28:01.354 [2024-11-05 18:17:30.669680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.354 [2024-11-05 18:17:30.673275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.354 [2024-11-05 18:17:30.673306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:01.354 [2024-11-05 18:17:30.673321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:28:01.354 [2024-11-05 18:17:30.673334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.679464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.679499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:01.615 [2024-11-05 18:17:30.679511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.087 ms 00:28:01.615 [2024-11-05 18:17:30.679520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.715805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.715842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:01.615 [2024-11-05 18:17:30.715856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.282 ms 00:28:01.615 [2024-11-05 18:17:30.715866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.735999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.736037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:01.615 [2024-11-05 18:17:30.736049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.127 ms 00:28:01.615 [2024-11-05 18:17:30.736076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.738146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.738309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:01.615 [2024-11-05 18:17:30.738329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.033 ms 00:28:01.615 [2024-11-05 18:17:30.738340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.773935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.774089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:01.615 [2024-11-05 18:17:30.774126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.629 ms 00:28:01.615 [2024-11-05 18:17:30.774136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.809039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.809218] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:01.615 [2024-11-05 18:17:30.809238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.924 ms 00:28:01.615 [2024-11-05 18:17:30.809249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.842848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.842975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:01.615 [2024-11-05 18:17:30.842995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.618 ms 00:28:01.615 [2024-11-05 18:17:30.843020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.876171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.615 [2024-11-05 18:17:30.876204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:01.615 [2024-11-05 18:17:30.876216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.107 ms 00:28:01.615 [2024-11-05 18:17:30.876225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.615 [2024-11-05 18:17:30.876260] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:01.615 [2024-11-05 18:17:30.876276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:01.615 [2024-11-05 18:17:30.876294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:01.615 [2024-11-05 18:17:30.876304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 
261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:01.615 [2024-11-05 18:17:30.876772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876960] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.876990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 
18:17:30.877233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:01.616 [2024-11-05 18:17:30.877339] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:01.616 [2024-11-05 18:17:30.877353] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15ee1f9e-64f6-4c91-a0d6-bdf6d43b3bc5 00:28:01.616 [2024-11-05 18:17:30.877365] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:01.616 [2024-11-05 18:17:30.877375] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:01.616 [2024-11-05 18:17:30.877385] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:01.616 [2024-11-05 18:17:30.877394] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:01.616 [2024-11-05 18:17:30.877404] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:01.616 [2024-11-05 18:17:30.877414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:01.616 [2024-11-05 18:17:30.877444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:01.616 [2024-11-05 18:17:30.877453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:01.616 [2024-11-05 18:17:30.877463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:01.616 [2024-11-05 18:17:30.877472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.616 [2024-11-05 18:17:30.877489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:01.616 [2024-11-05 18:17:30.877499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:28:01.616 [2024-11-05 18:17:30.877509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.616 [2024-11-05 18:17:30.896428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.616 [2024-11-05 18:17:30.896454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:01.616 [2024-11-05 18:17:30.896466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.911 ms 00:28:01.616 [2024-11-05 18:17:30.896476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:01.616 [2024-11-05 18:17:30.896921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.616 [2024-11-05 18:17:30.896935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:01.616 [2024-11-05 18:17:30.896951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:28:01.616 [2024-11-05 18:17:30.896960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:30.945543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:30.945577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.876 [2024-11-05 18:17:30.945588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:30.945599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:30.945651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:30.945662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.876 [2024-11-05 18:17:30.945676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:30.945685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:30.945750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:30.945763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.876 [2024-11-05 18:17:30.945773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:30.945782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:30.945798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:30.945807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.876 [2024-11-05 18:17:30.945817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:30.945830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.060687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.060736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.876 [2024-11-05 18:17:31.060749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:31.060759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.155322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.155367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:01.876 [2024-11-05 18:17:31.155380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:31.155395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.155483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.155495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.876 [2024-11-05 18:17:31.155506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:28:01.876 [2024-11-05 18:17:31.155516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.155551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.155562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.876 [2024-11-05 18:17:31.155573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:31.155582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.155684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.155697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.876 [2024-11-05 18:17:31.155707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:31.155717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.155752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.155764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:01.876 [2024-11-05 18:17:31.155774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:31.155783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.155821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.155832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.876 [2024-11-05 18:17:31.155842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:31.155851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.155905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.876 [2024-11-05 18:17:31.155916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.876 [2024-11-05 18:17:31.155926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.876 [2024-11-05 18:17:31.155936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.876 [2024-11-05 18:17:31.156049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.847 ms, result 0 00:28:02.814 00:28:02.814 00:28:03.074 18:17:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:04.980 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:04.980 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:04.980 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:04.980 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:04.980 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:04.980 18:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:04.980 18:17:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:04.980 18:17:34 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:04.980 Process with pid 78211 is not found 00:28:04.980 18:17:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78211 00:28:04.980 18:17:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78211 ']' 00:28:04.980 18:17:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 78211 00:28:04.980 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78211) - No such process 00:28:04.980 18:17:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 78211 is not found' 00:28:04.980 18:17:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:05.239 Remove shared memory files 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:05.239 ************************************ 00:28:05.239 END TEST ftl_dirty_shutdown 00:28:05.239 ************************************ 00:28:05.239 00:28:05.239 real 3m41.503s 00:28:05.239 user 4m9.648s 00:28:05.239 sys 0m39.180s 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:05.239 18:17:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:05.239 18:17:34 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:05.239 18:17:34 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:28:05.239 18:17:34 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:05.239 18:17:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:05.239 ************************************ 00:28:05.239 START TEST ftl_upgrade_shutdown 00:28:05.239 ************************************ 00:28:05.239 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:05.239 * Looking for test storage... 
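The ftl_dirty_shutdown test that just finished passes because the checksum recorded before the unclean shutdown still matches once the FTL device has been brought back up and the data read out again. Reduced to its essentials, the pattern is (paths shortened; a sketch of the flow, not the test script itself):

    md5sum testfile2 > testfile2.md5     # recorded before the dirty shutdown
    # ... kill the target without a clean FTL shutdown, restart, restore ...
    md5sum -c testfile2.md5              # after recovery: prints 'testfile2: OK'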
00:28:05.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:05.239 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:05.239 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:05.239 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:05.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.499 --rc genhtml_branch_coverage=1 00:28:05.499 --rc genhtml_function_coverage=1 00:28:05.499 --rc genhtml_legend=1 00:28:05.499 --rc geninfo_all_blocks=1 00:28:05.499 --rc geninfo_unexecuted_blocks=1 00:28:05.499 00:28:05.499 ' 00:28:05.499 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.500 --rc genhtml_branch_coverage=1 00:28:05.500 --rc genhtml_function_coverage=1 00:28:05.500 --rc genhtml_legend=1 00:28:05.500 --rc geninfo_all_blocks=1 00:28:05.500 --rc geninfo_unexecuted_blocks=1 00:28:05.500 00:28:05.500 ' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.500 --rc genhtml_branch_coverage=1 00:28:05.500 --rc genhtml_function_coverage=1 00:28:05.500 --rc genhtml_legend=1 00:28:05.500 --rc geninfo_all_blocks=1 00:28:05.500 --rc geninfo_unexecuted_blocks=1 00:28:05.500 00:28:05.500 ' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:05.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.500 --rc genhtml_branch_coverage=1 00:28:05.500 --rc genhtml_function_coverage=1 00:28:05.500 --rc genhtml_legend=1 00:28:05.500 --rc geninfo_all_blocks=1 00:28:05.500 --rc geninfo_unexecuted_blocks=1 00:28:05.500 00:28:05.500 ' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- 
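The cmp_versions walk traced above is a plain element-wise numeric comparison: each version string is split into components and compared left to right, which is how lcov 1.15 sorts below 2 even though it would win a lexical comparison. A self-contained sketch of the same idea (simplified to dot-separated components; the real helper also splits on '-' and ':'):

    ver_lt() {                      # returns 0 when version $1 < version $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                    # equal is not less-than
    }
    ver_lt 1.15 2 && echo 'lcov predates 2.x'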
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:05.500 18:17:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80606 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80606 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80606 ']' 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:05.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:05.500 [2024-11-05 18:17:34.771095] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
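waitforlisten above blocks until the spdk_tgt just launched (pid 80606) starts answering on its RPC socket before the test proceeds. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket and an rpc.py on PATH (the real helper lives in test/common/autotest_common.sh and is more thorough):

    ./build/bin/spdk_tgt '--cpumask=[0]' &
    pid=$!
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                   # target not listening yet
    done
    echo "spdk_tgt ($pid) is ready"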
00:28:05.500 [2024-11-05 18:17:34.771211] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80606 ] 00:28:05.760 [2024-11-05 18:17:34.948291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.760 [2024-11-05 18:17:35.047125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:06.698 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:28:06.958 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:07.217 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:07.217 { 00:28:07.217 "name": "basen1", 00:28:07.217 "aliases": [ 00:28:07.217 "c4acb842-bb10-4735-b4b3-349d5bba28a6" 00:28:07.217 ], 00:28:07.217 "product_name": "NVMe disk", 00:28:07.217 "block_size": 4096, 00:28:07.217 "num_blocks": 1310720, 00:28:07.217 "uuid": "c4acb842-bb10-4735-b4b3-349d5bba28a6", 00:28:07.217 "numa_id": -1, 00:28:07.217 "assigned_rate_limits": { 00:28:07.217 "rw_ios_per_sec": 0, 00:28:07.217 "rw_mbytes_per_sec": 0, 00:28:07.217 "r_mbytes_per_sec": 0, 00:28:07.217 "w_mbytes_per_sec": 0 00:28:07.217 }, 00:28:07.217 "claimed": true, 00:28:07.217 "claim_type": "read_many_write_one", 00:28:07.217 "zoned": false, 00:28:07.217 "supported_io_types": { 00:28:07.217 "read": true, 00:28:07.217 "write": true, 00:28:07.217 "unmap": true, 00:28:07.217 "flush": true, 00:28:07.217 "reset": true, 00:28:07.217 "nvme_admin": true, 00:28:07.217 "nvme_io": true, 00:28:07.217 "nvme_io_md": false, 00:28:07.217 "write_zeroes": true, 00:28:07.217 "zcopy": false, 00:28:07.217 "get_zone_info": false, 00:28:07.217 "zone_management": false, 00:28:07.217 "zone_append": false, 00:28:07.217 "compare": true, 00:28:07.217 "compare_and_write": false, 00:28:07.217 "abort": true, 00:28:07.217 "seek_hole": false, 00:28:07.217 "seek_data": false, 00:28:07.217 "copy": true, 00:28:07.217 "nvme_iov_md": false 00:28:07.217 }, 00:28:07.217 "driver_specific": { 00:28:07.217 "nvme": [ 00:28:07.217 { 00:28:07.217 "pci_address": "0000:00:11.0", 00:28:07.217 "trid": { 00:28:07.218 "trtype": "PCIe", 00:28:07.218 "traddr": "0000:00:11.0" 00:28:07.218 }, 00:28:07.218 "ctrlr_data": { 00:28:07.218 "cntlid": 0, 00:28:07.218 "vendor_id": "0x1b36", 00:28:07.218 "model_number": "QEMU NVMe Ctrl", 00:28:07.218 "serial_number": "12341", 00:28:07.218 "firmware_revision": "8.0.0", 00:28:07.218 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:07.218 "oacs": { 00:28:07.218 "security": 0, 00:28:07.218 "format": 1, 00:28:07.218 "firmware": 0, 00:28:07.218 "ns_manage": 1 00:28:07.218 }, 00:28:07.218 "multi_ctrlr": false, 00:28:07.218 "ana_reporting": false 00:28:07.218 }, 00:28:07.218 "vs": { 00:28:07.218 "nvme_version": "1.4" 00:28:07.218 }, 00:28:07.218 "ns_data": { 00:28:07.218 "id": 1, 00:28:07.218 "can_share": false 00:28:07.218 } 00:28:07.218 } 00:28:07.218 ], 00:28:07.218 "mp_policy": "active_passive" 00:28:07.218 } 00:28:07.218 } 00:28:07.218 ]' 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- 
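The bdev_size=5120 computed above follows directly from the JSON that bdev_get_bdevs returned for basen1: 4096-byte blocks x 1,310,720 blocks = 5,368,709,120 bytes = 5120 MiB. The namespace is therefore only 5 GiB, so the 20480 MiB base the test asks for cannot be carved from it directly, which is why a thin-provisioned lvol is created next. A sketch of the helper as traced (rpc.py assumed on PATH):

    get_bdev_size() {               # prints the bdev size in MiB
        local info bs nb
        info=$(rpc.py bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$info")
        nb=$(jq '.[] .num_blocks' <<< "$info")
        echo $(( bs * nb / 1024 / 1024 ))
    }
    get_bdev_size basen1            # -> 5120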
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:07.218 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:07.479 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=b7d51c16-b956-4ff7-bb02-b16cb0607592 00:28:07.479 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:07.479 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7d51c16-b956-4ff7-bb02-b16cb0607592 00:28:07.748 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:07.748 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1960a416-1ce4-45b5-9b5e-c6cd51afb6f6 00:28:07.748 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1960a416-1ce4-45b5-9b5e-c6cd51afb6f6 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=bc7a0028-a53d-4b0a-a330-fe1c94244acd 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z bc7a0028-a53d-4b0a-a330-fe1c94244acd ]] 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 bc7a0028-a53d-4b0a-a330-fe1c94244acd 5120 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=bc7a0028-a53d-4b0a-a330-fe1c94244acd 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size bc7a0028-a53d-4b0a-a330-fe1c94244acd 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=bc7a0028-a53d-4b0a-a330-fe1c94244acd 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:28:08.008 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc7a0028-a53d-4b0a-a330-fe1c94244acd 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:28:08.268 { 00:28:08.268 "name": "bc7a0028-a53d-4b0a-a330-fe1c94244acd", 00:28:08.268 "aliases": [ 00:28:08.268 "lvs/basen1p0" 00:28:08.268 ], 00:28:08.268 "product_name": "Logical Volume", 00:28:08.268 "block_size": 4096, 00:28:08.268 "num_blocks": 5242880, 00:28:08.268 "uuid": "bc7a0028-a53d-4b0a-a330-fe1c94244acd", 00:28:08.268 "assigned_rate_limits": { 00:28:08.268 "rw_ios_per_sec": 0, 00:28:08.268 "rw_mbytes_per_sec": 0, 00:28:08.268 "r_mbytes_per_sec": 0, 00:28:08.268 "w_mbytes_per_sec": 0 00:28:08.268 }, 00:28:08.268 "claimed": false, 00:28:08.268 "zoned": false, 00:28:08.268 "supported_io_types": { 00:28:08.268 "read": true, 00:28:08.268 "write": true, 00:28:08.268 "unmap": true, 00:28:08.268 "flush": false, 00:28:08.268 "reset": true, 00:28:08.268 "nvme_admin": false, 00:28:08.268 "nvme_io": false, 00:28:08.268 "nvme_io_md": false, 00:28:08.268 "write_zeroes": 
true, 00:28:08.268 "zcopy": false, 00:28:08.268 "get_zone_info": false, 00:28:08.268 "zone_management": false, 00:28:08.268 "zone_append": false, 00:28:08.268 "compare": false, 00:28:08.268 "compare_and_write": false, 00:28:08.268 "abort": false, 00:28:08.268 "seek_hole": true, 00:28:08.268 "seek_data": true, 00:28:08.268 "copy": false, 00:28:08.268 "nvme_iov_md": false 00:28:08.268 }, 00:28:08.268 "driver_specific": { 00:28:08.268 "lvol": { 00:28:08.268 "lvol_store_uuid": "1960a416-1ce4-45b5-9b5e-c6cd51afb6f6", 00:28:08.268 "base_bdev": "basen1", 00:28:08.268 "thin_provision": true, 00:28:08.268 "num_allocated_clusters": 0, 00:28:08.268 "snapshot": false, 00:28:08.268 "clone": false, 00:28:08.268 "esnap_clone": false 00:28:08.268 } 00:28:08.268 } 00:28:08.268 } 00:28:08.268 ]' 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:08.268 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:08.530 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:08.530 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:08.530 18:17:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:08.789 18:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:08.789 18:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:08.789 18:17:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d bc7a0028-a53d-4b0a-a330-fe1c94244acd -c cachen1p0 --l2p_dram_limit 2 00:28:09.050 [2024-11-05 18:17:38.224603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.224649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:09.050 [2024-11-05 18:17:38.224666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:09.050 [2024-11-05 18:17:38.224676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.224742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.224753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:09.050 [2024-11-05 18:17:38.224766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:09.050 [2024-11-05 18:17:38.224776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.224798] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:09.050 [2024-11-05 
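Strung together, the device setup traced across the last few steps reduces to this RPC sequence (UUIDs are the values printed by this run; the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path is shortened to rpc.py):

    rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    for lvs in $(rpc.py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        rpc.py bdev_lvol_delete_lvstore -u "$lvs"    # drop stale lvstores
    done
    rpc.py bdev_lvol_create_lvstore basen1 lvs
    rpc.py bdev_lvol_create basen1p0 20480 -t -u 1960a416-1ce4-45b5-9b5e-c6cd51afb6f6
    rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create cachen1 -s 5120 1
    rpc.py -t 60 bdev_ftl_create -b ftl -d bc7a0028-a53d-4b0a-a330-fe1c94244acd -c cachen1p0 --l2p_dram_limit 2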
18:17:38.225713] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:09.050 [2024-11-05 18:17:38.225757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.225767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:09.050 [2024-11-05 18:17:38.225781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.961 ms 00:28:09.050 [2024-11-05 18:17:38.225791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.225982] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d904b2d8-2e4f-4554-a6b3-d28ff20c020c 00:28:09.050 [2024-11-05 18:17:38.227421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.227457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:09.050 [2024-11-05 18:17:38.227469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:09.050 [2024-11-05 18:17:38.227482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.235024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.235060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:09.050 [2024-11-05 18:17:38.235076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.503 ms 00:28:09.050 [2024-11-05 18:17:38.235088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.235131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.235146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:09.050 [2024-11-05 18:17:38.235157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:09.050 [2024-11-05 18:17:38.235171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.235220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.235235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:09.050 [2024-11-05 18:17:38.235245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:09.050 [2024-11-05 18:17:38.235262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.235284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:09.050 [2024-11-05 18:17:38.240297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.240330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:09.050 [2024-11-05 18:17:38.240345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.023 ms 00:28:09.050 [2024-11-05 18:17:38.240355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.240399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.050 [2024-11-05 18:17:38.240410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:09.050 [2024-11-05 18:17:38.240431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:09.050 [2024-11-05 18:17:38.240441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:09.050 [2024-11-05 18:17:38.240485] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:09.050 [2024-11-05 18:17:38.240605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:09.050 [2024-11-05 18:17:38.240624] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:09.050 [2024-11-05 18:17:38.240637] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:09.050 [2024-11-05 18:17:38.240652] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:09.050 [2024-11-05 18:17:38.240664] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:09.050 [2024-11-05 18:17:38.240677] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:09.050 [2024-11-05 18:17:38.240686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:09.050 [2024-11-05 18:17:38.240716] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:09.051 [2024-11-05 18:17:38.240726] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:09.051 [2024-11-05 18:17:38.240739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.051 [2024-11-05 18:17:38.240749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:09.051 [2024-11-05 18:17:38.240761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:28:09.051 [2024-11-05 18:17:38.240772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.051 [2024-11-05 18:17:38.240845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.051 [2024-11-05 18:17:38.240855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:09.051 [2024-11-05 18:17:38.240869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:09.051 [2024-11-05 18:17:38.240889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.051 [2024-11-05 18:17:38.240982] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:09.051 [2024-11-05 18:17:38.240999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:09.051 [2024-11-05 18:17:38.241012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:09.051 [2024-11-05 18:17:38.241045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:09.051 [2024-11-05 18:17:38.241067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:09.051 [2024-11-05 18:17:38.241079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:09.051 [2024-11-05 18:17:38.241089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:09.051 [2024-11-05 18:17:38.241109] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:28:09.051 [2024-11-05 18:17:38.241121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:09.051 [2024-11-05 18:17:38.241144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:09.051 [2024-11-05 18:17:38.241153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:09.051 [2024-11-05 18:17:38.241175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:09.051 [2024-11-05 18:17:38.241188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:09.051 [2024-11-05 18:17:38.241209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:09.051 [2024-11-05 18:17:38.241219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:09.051 [2024-11-05 18:17:38.241239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:09.051 [2024-11-05 18:17:38.241251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:09.051 [2024-11-05 18:17:38.241271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:09.051 [2024-11-05 18:17:38.241280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:09.051 [2024-11-05 18:17:38.241300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:09.051 [2024-11-05 18:17:38.241312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:09.051 [2024-11-05 18:17:38.241335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:09.051 [2024-11-05 18:17:38.241344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:09.051 [2024-11-05 18:17:38.241365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:09.051 [2024-11-05 18:17:38.241396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:09.051 [2024-11-05 18:17:38.241437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:09.051 [2024-11-05 18:17:38.241449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241458] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:28:09.051 [2024-11-05 18:17:38.241470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:09.051 [2024-11-05 18:17:38.241480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:09.051 [2024-11-05 18:17:38.241505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:09.051 [2024-11-05 18:17:38.241520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:09.051 [2024-11-05 18:17:38.241529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:09.051 [2024-11-05 18:17:38.241541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:09.051 [2024-11-05 18:17:38.241550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:09.051 [2024-11-05 18:17:38.241562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:09.051 [2024-11-05 18:17:38.241575] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:09.051 [2024-11-05 18:17:38.241591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:09.051 [2024-11-05 18:17:38.241617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:09.051 [2024-11-05 18:17:38.241651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:09.051 [2024-11-05 18:17:38.241663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:09.051 [2024-11-05 18:17:38.241674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:09.051 [2024-11-05 18:17:38.241686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:09.051 [2024-11-05 18:17:38.241779] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:09.051 [2024-11-05 18:17:38.241792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:09.051 [2024-11-05 18:17:38.241816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:09.051 [2024-11-05 18:17:38.241827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:09.051 [2024-11-05 18:17:38.241840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:09.051 [2024-11-05 18:17:38.241851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.051 [2024-11-05 18:17:38.241863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:09.051 [2024-11-05 18:17:38.241874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.927 ms 00:28:09.051 [2024-11-05 18:17:38.241886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.051 [2024-11-05 18:17:38.241926] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
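For orientation, the bdev stack this FTL instance sits on reduces to the RPC sequence below; these are the same commands echoed in the xtrace above, with <lvs-uuid> and <lvol-uuid> standing in for the UUIDs printed in this particular run:

    # 20 GiB thin-provisioned base volume carved from basen1
    scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
    scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>    # prints <lvol-uuid>
    # 5 GiB write-buffer cache split off the PCIe NVMe device
    scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create cachen1 -s 5120 1                 # yields cachen1p0
    # FTL bdev on top of both
    scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

The 20480.00 MiB base and 5120.00 MiB NV cache capacities in the layout dump above follow directly from these sizes.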
00:28:09.051 [2024-11-05 18:17:38.241944] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:13.249 [2024-11-05 18:17:41.956537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:41.956609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:13.249 [2024-11-05 18:17:41.956642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3720.640 ms 00:28:13.249 [2024-11-05 18:17:41.956655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:41.995269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:41.995335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:13.249 [2024-11-05 18:17:41.995351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.326 ms 00:28:13.249 [2024-11-05 18:17:41.995380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:41.995465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:41.995482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:13.249 [2024-11-05 18:17:41.995493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:13.249 [2024-11-05 18:17:41.995509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.041885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.041932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:13.249 [2024-11-05 18:17:42.041946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.409 ms 00:28:13.249 [2024-11-05 18:17:42.041975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.042006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.042024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:13.249 [2024-11-05 18:17:42.042035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:13.249 [2024-11-05 18:17:42.042047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.042565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.042592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:13.249 [2024-11-05 18:17:42.042603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:28:13.249 [2024-11-05 18:17:42.042615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.042664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.042678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:13.249 [2024-11-05 18:17:42.042691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:13.249 [2024-11-05 18:17:42.042707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.062958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.063000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:13.249 [2024-11-05 18:17:42.063013] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.265 ms 00:28:13.249 [2024-11-05 18:17:42.063025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.074997] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:13.249 [2024-11-05 18:17:42.076082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.076111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:13.249 [2024-11-05 18:17:42.076126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.984 ms 00:28:13.249 [2024-11-05 18:17:42.076136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.128908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.128951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:13.249 [2024-11-05 18:17:42.128969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.823 ms 00:28:13.249 [2024-11-05 18:17:42.128996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.129106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.129125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:13.249 [2024-11-05 18:17:42.129141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:28:13.249 [2024-11-05 18:17:42.129151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.162795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.162835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:28:13.249 [2024-11-05 18:17:42.162851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.644 ms 00:28:13.249 [2024-11-05 18:17:42.162877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.196838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.196874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:13.249 [2024-11-05 18:17:42.196889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.970 ms 00:28:13.249 [2024-11-05 18:17:42.196915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.197623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.197652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:13.249 [2024-11-05 18:17:42.197666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.668 ms 00:28:13.249 [2024-11-05 18:17:42.197677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.298493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.298529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:13.249 [2024-11-05 18:17:42.298548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.910 ms 00:28:13.249 [2024-11-05 18:17:42.298574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.334092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:13.249 [2024-11-05 18:17:42.334133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:13.249 [2024-11-05 18:17:42.334174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.491 ms 00:28:13.249 [2024-11-05 18:17:42.334185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.368323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.368356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:13.249 [2024-11-05 18:17:42.368372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.149 ms 00:28:13.249 [2024-11-05 18:17:42.368381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.402447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.249 [2024-11-05 18:17:42.402483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:13.249 [2024-11-05 18:17:42.402498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.054 ms 00:28:13.249 [2024-11-05 18:17:42.402508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.249 [2024-11-05 18:17:42.402568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.250 [2024-11-05 18:17:42.402580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:13.250 [2024-11-05 18:17:42.402596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:13.250 [2024-11-05 18:17:42.402606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.250 [2024-11-05 18:17:42.402703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.250 [2024-11-05 18:17:42.402715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:13.250 [2024-11-05 18:17:42.402732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:13.250 [2024-11-05 18:17:42.402741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.250 [2024-11-05 18:17:42.403709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4185.454 ms, result 0 00:28:13.250 { 00:28:13.250 "name": "ftl", 00:28:13.250 "uuid": "d904b2d8-2e4f-4554-a6b3-d28ff20c020c" 00:28:13.250 } 00:28:13.250 18:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:13.510 [2024-11-05 18:17:42.622713] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.510 18:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:13.510 18:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:13.769 [2024-11-05 18:17:43.006612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:13.769 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:14.029 [2024-11-05 18:17:43.215875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:14.029 18:17:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:14.289 Fill FTL, iteration 1 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80728 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80728 /var/tmp/spdk.tgt.sock 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80728 ']' 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:14.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:14.289 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:14.548 [2024-11-05 18:17:43.664682] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
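Stripped of xtrace noise, the NVMe/TCP export that precedes this fill (ftl/common.sh@121-126 above) is just four RPCs plus a config save; -a admits any host and -m 1 caps the subsystem at one namespace:

    scripts/rpc.py nvmf_create_transport --trtype TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    scripts/rpc.py save_config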
00:28:14.548 [2024-11-05 18:17:43.664814] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80728 ] 00:28:14.548 [2024-11-05 18:17:43.845994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.807 [2024-11-05 18:17:43.973326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.743 18:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:15.743 18:17:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:15.743 18:17:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:16.002 ftln1 00:28:16.002 18:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:16.002 18:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:16.260 18:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:16.260 18:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80728 00:28:16.260 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80728 ']' 00:28:16.260 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80728 00:28:16.260 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:16.261 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:16.261 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80728 00:28:16.261 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:16.261 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:16.261 killing process with pid 80728 00:28:16.261 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80728' 00:28:16.261 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80728 00:28:16.261 18:17:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80728 00:28:18.793 18:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:18.793 18:17:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:18.793 [2024-11-05 18:17:47.885369] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
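The tcp_dd helper being traced here works in two phases. A sketch reconstructed from ftl/common.sh@151-199 above; the redirection of the captured config into ini.json is inferred from the --json argument spdk_dd receives, and paths are abbreviated:

    # Phase 1 (first call only): a throw-away initiator target attaches to the
    # exported subsystem so its bdev configuration can be captured once.
    build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0    # exposes ftln1
    {
        echo '{"subsystems": ['
        scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > test/ftl/config/ini.json
    killprocess "$spdk_ini_pid"    # helper from autotest_common.sh
    # Phase 2: spdk_dd replays that config and drives ftln1 directly.
    build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

On subsequent calls the cached ini.json short-circuits phase 1 (the [[ -f ... ini.json ]] / return 0 pair at ftl/common.sh@153-154 in the later traces), so only the spdk_dd step repeats.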
00:28:18.793 [2024-11-05 18:17:47.885542] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80787 ] 00:28:18.793 [2024-11-05 18:17:48.067489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.051 [2024-11-05 18:17:48.194632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.426  [2024-11-05T18:17:50.683Z] Copying: 259/1024 [MB] (259 MBps) [2024-11-05T18:17:52.058Z] Copying: 518/1024 [MB] (259 MBps) [2024-11-05T18:17:52.627Z] Copying: 780/1024 [MB] (262 MBps) [2024-11-05T18:17:54.005Z] Copying: 1024/1024 [MB] (average 260 MBps) 00:28:24.682 00:28:24.682 Calculate MD5 checksum, iteration 1 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:24.682 18:17:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:24.682 [2024-11-05 18:17:53.870747] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:28:24.682 [2024-11-05 18:17:53.870887] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80850 ] 00:28:24.941 [2024-11-05 18:17:54.051399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.941 [2024-11-05 18:17:54.178483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:26.845  [2024-11-05T18:17:56.427Z] Copying: 667/1024 [MB] (667 MBps) [2024-11-05T18:17:57.361Z] Copying: 1024/1024 [MB] (average 656 MBps) 00:28:28.038 00:28:28.038 18:17:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:28.038 18:17:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:29.945 Fill FTL, iteration 2 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ed24e0f49df1d4d5b71d954cd18c8164 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:29.945 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:29.946 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:29.946 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:29.946 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:29.946 [2024-11-05 18:17:58.957249] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
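With iteration 1 complete (MD5 ed24e0f49df1d4d5b71d954cd18c8164 recorded above), the same fill-and-checksum pattern repeats one 1 GiB window further in. The driving loop, reconstructed from the upgrade_shutdown.sh xtrace (@29-@48); $testdir is shorthand for the test/ftl directory and the exact increment syntax is a guess:

    seek=0; skip=0; bs=1048576; count=1024; qd=2; iterations=2; sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))      # next fill lands 1024 MiB further into ftln1
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum $testdir/file | cut -f1 '-d ')    # kept for comparison after the shutdown/upgrade
    done

This matches the seek=1024 and skip=1024 assignments after iteration 1 above, and the seek=2048/skip=2048 values once iteration 2 finishes below.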
00:28:29.946 [2024-11-05 18:17:58.957620] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80906 ] 00:28:29.946 [2024-11-05 18:17:59.140873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.946 [2024-11-05 18:17:59.265751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.851  [2024-11-05T18:18:02.111Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-05T18:18:03.048Z] Copying: 485/1024 [MB] (245 MBps) [2024-11-05T18:18:03.986Z] Copying: 730/1024 [MB] (245 MBps) [2024-11-05T18:18:03.986Z] Copying: 977/1024 [MB] (247 MBps) [2024-11-05T18:18:05.365Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:28:36.042 00:28:36.042 Calculate MD5 checksum, iteration 2 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:36.042 18:18:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:36.042 [2024-11-05 18:18:05.149012] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
00:28:36.042 [2024-11-05 18:18:05.149336] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80971 ] 00:28:36.042 [2024-11-05 18:18:05.328856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.302 [2024-11-05 18:18:05.439794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.209  [2024-11-05T18:18:07.792Z] Copying: 715/1024 [MB] (715 MBps) [2024-11-05T18:18:09.192Z] Copying: 1024/1024 [MB] (average 691 MBps) 00:28:39.870 00:28:39.870 18:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:39.870 18:18:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:41.248 18:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:41.248 18:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fed90cc23c5b095bd8e2b8118e88dd81 00:28:41.248 18:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:41.248 18:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:41.248 18:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:41.507 [2024-11-05 18:18:10.717949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.507 [2024-11-05 18:18:10.718002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:41.507 [2024-11-05 18:18:10.718018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:41.507 [2024-11-05 18:18:10.718029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.508 [2024-11-05 18:18:10.718062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.508 [2024-11-05 18:18:10.718073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:41.508 [2024-11-05 18:18:10.718083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:41.508 [2024-11-05 18:18:10.718098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.508 [2024-11-05 18:18:10.718142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:41.508 [2024-11-05 18:18:10.718154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:41.508 [2024-11-05 18:18:10.718165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:41.508 [2024-11-05 18:18:10.718174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:41.508 [2024-11-05 18:18:10.718246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.289 ms, result 0 00:28:41.508 true 00:28:41.508 18:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:41.767 { 00:28:41.767 "name": "ftl", 00:28:41.767 "properties": [ 00:28:41.767 { 00:28:41.767 "name": "superblock_version", 00:28:41.767 "value": 5, 00:28:41.767 "read-only": true 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "name": "base_device", 00:28:41.767 "bands": [ 00:28:41.767 { 00:28:41.767 "id": 0, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 
00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 1, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 2, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 3, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 4, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 5, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 6, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 7, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 8, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 9, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 10, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 11, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 12, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 13, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 14, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 15, 00:28:41.767 "state": "FREE", 00:28:41.767 "validity": 0.0 00:28:41.767 }, 00:28:41.767 { 00:28:41.767 "id": 16, 00:28:41.768 "state": "FREE", 00:28:41.768 "validity": 0.0 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "id": 17, 00:28:41.768 "state": "FREE", 00:28:41.768 "validity": 0.0 00:28:41.768 } 00:28:41.768 ], 00:28:41.768 "read-only": true 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "name": "cache_device", 00:28:41.768 "type": "bdev", 00:28:41.768 "chunks": [ 00:28:41.768 { 00:28:41.768 "id": 0, 00:28:41.768 "state": "INACTIVE", 00:28:41.768 "utilization": 0.0 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "id": 1, 00:28:41.768 "state": "CLOSED", 00:28:41.768 "utilization": 1.0 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "id": 2, 00:28:41.768 "state": "CLOSED", 00:28:41.768 "utilization": 1.0 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "id": 3, 00:28:41.768 "state": "OPEN", 00:28:41.768 "utilization": 0.001953125 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "id": 4, 00:28:41.768 "state": "OPEN", 00:28:41.768 "utilization": 0.0 00:28:41.768 } 00:28:41.768 ], 00:28:41.768 "read-only": true 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "name": "verbose_mode", 00:28:41.768 "value": true, 00:28:41.768 "unit": "", 00:28:41.768 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:41.768 }, 00:28:41.768 { 00:28:41.768 "name": "prep_upgrade_on_shutdown", 00:28:41.768 "value": false, 00:28:41.768 "unit": "", 00:28:41.768 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:41.768 } 00:28:41.768 ] 00:28:41.768 } 00:28:41.768 18:18:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:42.027 [2024-11-05 18:18:11.149897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:42.027 [2024-11-05 18:18:11.149947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:42.027 [2024-11-05 18:18:11.149961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:42.027 [2024-11-05 18:18:11.149986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.027 [2024-11-05 18:18:11.150012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.027 [2024-11-05 18:18:11.150023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:42.027 [2024-11-05 18:18:11.150033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:42.027 [2024-11-05 18:18:11.150042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.027 [2024-11-05 18:18:11.150062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.027 [2024-11-05 18:18:11.150072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:42.027 [2024-11-05 18:18:11.150082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:42.027 [2024-11-05 18:18:11.150091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.027 [2024-11-05 18:18:11.150147] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.237 ms, result 0 00:28:42.027 true 00:28:42.027 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:42.027 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:42.027 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:42.287 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:42.287 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:42.287 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:42.287 [2024-11-05 18:18:11.577564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.287 [2024-11-05 18:18:11.577711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:42.287 [2024-11-05 18:18:11.577741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:42.287 [2024-11-05 18:18:11.577752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.287 [2024-11-05 18:18:11.577785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.287 [2024-11-05 18:18:11.577797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:42.287 [2024-11-05 18:18:11.577807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:42.287 [2024-11-05 18:18:11.577816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:42.287 [2024-11-05 18:18:11.577835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:42.287 [2024-11-05 18:18:11.577845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:42.287 [2024-11-05 18:18:11.577855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:42.287 [2024-11-05 18:18:11.577865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:28:42.287 [2024-11-05 18:18:11.577920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.341 ms, result 0 00:28:42.287 true 00:28:42.287 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:42.547 { 00:28:42.547 "name": "ftl", 00:28:42.547 "properties": [ 00:28:42.547 { 00:28:42.547 "name": "superblock_version", 00:28:42.547 "value": 5, 00:28:42.547 "read-only": true 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "name": "base_device", 00:28:42.547 "bands": [ 00:28:42.547 { 00:28:42.547 "id": 0, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 1, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 2, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 3, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 4, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 5, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 6, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 7, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 8, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 9, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 10, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 11, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 12, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 13, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 14, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 15, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 16, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "id": 17, 00:28:42.547 "state": "FREE", 00:28:42.547 "validity": 0.0 00:28:42.547 } 00:28:42.547 ], 00:28:42.547 "read-only": true 00:28:42.547 }, 00:28:42.547 { 00:28:42.547 "name": "cache_device", 00:28:42.547 "type": "bdev", 00:28:42.547 "chunks": [ 00:28:42.547 { 00:28:42.547 "id": 0, 00:28:42.547 "state": "INACTIVE", 00:28:42.548 "utilization": 0.0 00:28:42.548 }, 00:28:42.548 { 00:28:42.548 "id": 1, 00:28:42.548 "state": "CLOSED", 00:28:42.548 "utilization": 1.0 00:28:42.548 }, 00:28:42.548 { 00:28:42.548 "id": 2, 00:28:42.548 "state": "CLOSED", 00:28:42.548 "utilization": 1.0 00:28:42.548 }, 00:28:42.548 { 00:28:42.548 "id": 3, 00:28:42.548 "state": "OPEN", 00:28:42.548 "utilization": 0.001953125 00:28:42.548 }, 00:28:42.548 { 00:28:42.548 "id": 4, 00:28:42.548 "state": "OPEN", 00:28:42.548 "utilization": 0.0 00:28:42.548 } 00:28:42.548 ], 00:28:42.548 "read-only": true 00:28:42.548 }, 00:28:42.548 { 00:28:42.548 "name": "verbose_mode", 
00:28:42.548 "value": true, 00:28:42.548 "unit": "", 00:28:42.548 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:42.548 }, 00:28:42.548 { 00:28:42.548 "name": "prep_upgrade_on_shutdown", 00:28:42.548 "value": true, 00:28:42.548 "unit": "", 00:28:42.548 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:42.548 } 00:28:42.548 ] 00:28:42.548 } 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80606 ]] 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80606 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80606 ']' 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80606 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80606 00:28:42.548 killing process with pid 80606 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80606' 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80606 00:28:42.548 18:18:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80606 00:28:43.928 [2024-11-05 18:18:12.893639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:43.928 [2024-11-05 18:18:12.913901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.928 [2024-11-05 18:18:12.913944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:43.928 [2024-11-05 18:18:12.913961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:43.928 [2024-11-05 18:18:12.913987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:43.928 [2024-11-05 18:18:12.914009] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:43.928 [2024-11-05 18:18:12.918172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:43.928 [2024-11-05 18:18:12.918216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:43.928 [2024-11-05 18:18:12.918229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.154 ms 00:28:43.928 [2024-11-05 18:18:12.918239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.053 [2024-11-05 18:18:20.017777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.053 [2024-11-05 18:18:20.017829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:52.053 [2024-11-05 18:18:20.017845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7111.038 ms 00:28:52.053 [2024-11-05 18:18:20.017875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.053 [2024-11-05 18:18:20.019117] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:52.053 [2024-11-05 18:18:20.019153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:52.053 [2024-11-05 18:18:20.019165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.226 ms 00:28:52.053 [2024-11-05 18:18:20.019176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.053 [2024-11-05 18:18:20.020112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.020140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:52.054 [2024-11-05 18:18:20.020152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.906 ms 00:28:52.054 [2024-11-05 18:18:20.020162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.034608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.034774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:52.054 [2024-11-05 18:18:20.034811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.428 ms 00:28:52.054 [2024-11-05 18:18:20.034821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.043728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.043766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:52.054 [2024-11-05 18:18:20.043779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.836 ms 00:28:52.054 [2024-11-05 18:18:20.043789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.043864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.043876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:52.054 [2024-11-05 18:18:20.043892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:52.054 [2024-11-05 18:18:20.043901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.058220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.058261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:52.054 [2024-11-05 18:18:20.058274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.325 ms 00:28:52.054 [2024-11-05 18:18:20.058285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.072463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.072604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:52.054 [2024-11-05 18:18:20.072623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.163 ms 00:28:52.054 [2024-11-05 18:18:20.072649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.088604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.088749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:52.054 [2024-11-05 18:18:20.088768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.924 ms 00:28:52.054 [2024-11-05 18:18:20.088793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.102924] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.103073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:52.054 [2024-11-05 18:18:20.103092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.058 ms 00:28:52.054 [2024-11-05 18:18:20.103124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.103233] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:52.054 [2024-11-05 18:18:20.103249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:52.054 [2024-11-05 18:18:20.103262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:52.054 [2024-11-05 18:18:20.103285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:52.054 [2024-11-05 18:18:20.103297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:52.054 [2024-11-05 18:18:20.103470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:52.054 [2024-11-05 18:18:20.103485] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d904b2d8-2e4f-4554-a6b3-d28ff20c020c 00:28:52.054 [2024-11-05 18:18:20.103495] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:52.054 [2024-11-05 18:18:20.103505] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:28:52.054 [2024-11-05 18:18:20.103514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:52.054 [2024-11-05 18:18:20.103526] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:52.054 [2024-11-05 18:18:20.103536] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:52.054 [2024-11-05 18:18:20.103552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:52.054 [2024-11-05 18:18:20.103562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:52.054 [2024-11-05 18:18:20.103571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:52.054 [2024-11-05 18:18:20.103580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:52.054 [2024-11-05 18:18:20.103590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.103605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:52.054 [2024-11-05 18:18:20.103616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:28:52.054 [2024-11-05 18:18:20.103626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.123335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.123368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:52.054 [2024-11-05 18:18:20.123380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.710 ms 00:28:52.054 [2024-11-05 18:18:20.123411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.123979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:52.054 [2024-11-05 18:18:20.123996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:52.054 [2024-11-05 18:18:20.124007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.535 ms 00:28:52.054 [2024-11-05 18:18:20.124016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.188234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.188273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:52.054 [2024-11-05 18:18:20.188292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.188303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.188335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.188346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:52.054 [2024-11-05 18:18:20.188356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.188366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.188469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.188484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:52.054 [2024-11-05 18:18:20.188495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.188505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.188529] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.188545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:52.054 [2024-11-05 18:18:20.188556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.188566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.309053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.309107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:52.054 [2024-11-05 18:18:20.309122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.309138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.402648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.402825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:52.054 [2024-11-05 18:18:20.402865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.402875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.402980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.402992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:52.054 [2024-11-05 18:18:20.403003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.403013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.403063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.403075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:52.054 [2024-11-05 18:18:20.403085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.054 [2024-11-05 18:18:20.403095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.054 [2024-11-05 18:18:20.403212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.054 [2024-11-05 18:18:20.403226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:52.054 [2024-11-05 18:18:20.403236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.055 [2024-11-05 18:18:20.403245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.055 [2024-11-05 18:18:20.403279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.055 [2024-11-05 18:18:20.403296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:52.055 [2024-11-05 18:18:20.403306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.055 [2024-11-05 18:18:20.403324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.055 [2024-11-05 18:18:20.403362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.055 [2024-11-05 18:18:20.403372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:52.055 [2024-11-05 18:18:20.403382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.055 [2024-11-05 18:18:20.403392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.055 
[2024-11-05 18:18:20.403459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:52.055 [2024-11-05 18:18:20.403489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:52.055 [2024-11-05 18:18:20.403499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:52.055 [2024-11-05 18:18:20.403509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:52.055 [2024-11-05 18:18:20.403645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7501.861 ms, result 0 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81169 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:54.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.593 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:54.594 18:18:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81169 00:28:54.594 18:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81169 ']' 00:28:54.594 18:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.594 18:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:54.594 18:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.594 18:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:54.594 18:18:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:54.594 [2024-11-05 18:18:23.490342] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
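The 'FTL shutdown' sequence above completed with result 0, and the statistics it dumped are internally consistent: the per-band valid counts (261120 + 261120 + 2048) sum to the reported user writes, and the WAF line follows from the usual definition of write amplification, total media writes divided by user writes. A hedged sanity check of those numbers, not part of the test itself:

# Not from the test scripts -- just re-deriving the figures dumped above.
echo $(( 261120 + 261120 + 2048 ))                      # 524288, matches "user writes" and "total valid LBAs"
printf 'WAF = %.4f\n' "$(bc -l <<< '786752 / 524288')"  # WAF = 1.5006, matches the dumped value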
00:28:54.594 [2024-11-05 18:18:23.490639] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81169 ] 00:28:54.594 [2024-11-05 18:18:23.670151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.594 [2024-11-05 18:18:23.776950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.531 [2024-11-05 18:18:24.713095] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:55.532 [2024-11-05 18:18:24.713322] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:55.792 [2024-11-05 18:18:24.859230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.792 [2024-11-05 18:18:24.859432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:55.792 [2024-11-05 18:18:24.859455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:55.792 [2024-11-05 18:18:24.859466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.792 [2024-11-05 18:18:24.859530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.792 [2024-11-05 18:18:24.859542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:55.792 [2024-11-05 18:18:24.859553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:55.792 [2024-11-05 18:18:24.859563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.792 [2024-11-05 18:18:24.859592] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:55.792 [2024-11-05 18:18:24.860566] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:55.792 [2024-11-05 18:18:24.860589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.792 [2024-11-05 18:18:24.860600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:55.792 [2024-11-05 18:18:24.860611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.009 ms 00:28:55.792 [2024-11-05 18:18:24.860621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.792 [2024-11-05 18:18:24.862105] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:55.792 [2024-11-05 18:18:24.880300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.792 [2024-11-05 18:18:24.880345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:55.792 [2024-11-05 18:18:24.880365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.225 ms 00:28:55.792 [2024-11-05 18:18:24.880376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.880459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 18:18:24.880473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:55.793 [2024-11-05 18:18:24.880483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:28:55.793 [2024-11-05 18:18:24.880493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.887298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 
18:18:24.887490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:55.793 [2024-11-05 18:18:24.887512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.738 ms 00:28:55.793 [2024-11-05 18:18:24.887522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.887590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 18:18:24.887603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:55.793 [2024-11-05 18:18:24.887614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:55.793 [2024-11-05 18:18:24.887624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.887667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 18:18:24.887679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:55.793 [2024-11-05 18:18:24.887693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:55.793 [2024-11-05 18:18:24.887703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.887728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:55.793 [2024-11-05 18:18:24.892563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 18:18:24.892591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:55.793 [2024-11-05 18:18:24.892603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.848 ms 00:28:55.793 [2024-11-05 18:18:24.892633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.892659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 18:18:24.892669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:55.793 [2024-11-05 18:18:24.892679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:55.793 [2024-11-05 18:18:24.892688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.892742] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:55.793 [2024-11-05 18:18:24.892764] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:55.793 [2024-11-05 18:18:24.892802] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:55.793 [2024-11-05 18:18:24.892819] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:55.793 [2024-11-05 18:18:24.892904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:55.793 [2024-11-05 18:18:24.892917] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:55.793 [2024-11-05 18:18:24.892930] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:55.793 [2024-11-05 18:18:24.892942] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:55.793 [2024-11-05 18:18:24.892953] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:55.793 [2024-11-05 18:18:24.892968] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:55.793 [2024-11-05 18:18:24.892977] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:55.793 [2024-11-05 18:18:24.892987] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:55.793 [2024-11-05 18:18:24.892997] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:55.793 [2024-11-05 18:18:24.893007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 18:18:24.893016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:55.793 [2024-11-05 18:18:24.893027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:28:55.793 [2024-11-05 18:18:24.893036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.893106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.793 [2024-11-05 18:18:24.893116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:55.793 [2024-11-05 18:18:24.893126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:55.793 [2024-11-05 18:18:24.893140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.793 [2024-11-05 18:18:24.893226] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:55.793 [2024-11-05 18:18:24.893239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:55.793 [2024-11-05 18:18:24.893249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:55.793 [2024-11-05 18:18:24.893259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:55.793 [2024-11-05 18:18:24.893279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:55.793 [2024-11-05 18:18:24.893298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:55.793 [2024-11-05 18:18:24.893308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:55.793 [2024-11-05 18:18:24.893317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:55.793 [2024-11-05 18:18:24.893337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:55.793 [2024-11-05 18:18:24.893346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:55.793 [2024-11-05 18:18:24.893363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:55.793 [2024-11-05 18:18:24.893372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:55.793 [2024-11-05 18:18:24.893390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:55.793 [2024-11-05 18:18:24.893398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893407] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:55.793 [2024-11-05 18:18:24.893416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:55.793 [2024-11-05 18:18:24.893446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.793 [2024-11-05 18:18:24.893456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:55.793 [2024-11-05 18:18:24.893465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:55.793 [2024-11-05 18:18:24.893474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.793 [2024-11-05 18:18:24.893494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:55.793 [2024-11-05 18:18:24.893504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:55.793 [2024-11-05 18:18:24.893513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.793 [2024-11-05 18:18:24.893522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:55.793 [2024-11-05 18:18:24.893530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:55.793 [2024-11-05 18:18:24.893539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:55.793 [2024-11-05 18:18:24.893548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:55.793 [2024-11-05 18:18:24.893557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:55.793 [2024-11-05 18:18:24.893566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:55.793 [2024-11-05 18:18:24.893584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:55.793 [2024-11-05 18:18:24.893609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:55.793 [2024-11-05 18:18:24.893626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:55.793 [2024-11-05 18:18:24.893653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:55.793 [2024-11-05 18:18:24.893663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893673] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:55.793 [2024-11-05 18:18:24.893683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:55.793 [2024-11-05 18:18:24.893692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:55.793 [2024-11-05 18:18:24.893702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:55.793 [2024-11-05 18:18:24.893716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:55.793 [2024-11-05 18:18:24.893733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:55.793 [2024-11-05 18:18:24.893742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:55.794 [2024-11-05 18:18:24.893751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:55.794 [2024-11-05 18:18:24.893760] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:55.794 [2024-11-05 18:18:24.893770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:55.794 [2024-11-05 18:18:24.893780] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:55.794 [2024-11-05 18:18:24.893792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:55.794 [2024-11-05 18:18:24.893813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:55.794 [2024-11-05 18:18:24.893844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:55.794 [2024-11-05 18:18:24.893854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:55.794 [2024-11-05 18:18:24.893864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:55.794 [2024-11-05 18:18:24.893875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:55.794 [2024-11-05 18:18:24.893945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:55.794 [2024-11-05 18:18:24.893956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:55.794 [2024-11-05 18:18:24.893977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:55.794 [2024-11-05 18:18:24.893987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:55.794 [2024-11-05 18:18:24.894003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:55.794 [2024-11-05 18:18:24.894017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.794 [2024-11-05 18:18:24.894028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:55.794 [2024-11-05 18:18:24.894038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:28:55.794 [2024-11-05 18:18:24.894047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.794 [2024-11-05 18:18:24.894092] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:55.794 [2024-11-05 18:18:24.894109] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:59.996 [2024-11-05 18:18:28.598174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.598234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:59.996 [2024-11-05 18:18:28.598250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3710.094 ms 00:28:59.996 [2024-11-05 18:18:28.598261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.633474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.633522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:59.996 [2024-11-05 18:18:28.633538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.936 ms 00:28:59.996 [2024-11-05 18:18:28.633548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.633630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.633647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:59.996 [2024-11-05 18:18:28.633658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:59.996 [2024-11-05 18:18:28.633668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.673388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.673438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:59.996 [2024-11-05 18:18:28.673453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.741 ms 00:28:59.996 [2024-11-05 18:18:28.673466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.673514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.673525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:59.996 [2024-11-05 18:18:28.673537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:59.996 [2024-11-05 18:18:28.673546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.674065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.674086] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:59.996 [2024-11-05 18:18:28.674097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.443 ms 00:28:59.996 [2024-11-05 18:18:28.674108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.674158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.674170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:59.996 [2024-11-05 18:18:28.674181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:59.996 [2024-11-05 18:18:28.674191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.694303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.694341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:59.996 [2024-11-05 18:18:28.694354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.121 ms 00:28:59.996 [2024-11-05 18:18:28.694380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.712867] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:59.996 [2024-11-05 18:18:28.712906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:59.996 [2024-11-05 18:18:28.712921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.712947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:59.996 [2024-11-05 18:18:28.712959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.446 ms 00:28:59.996 [2024-11-05 18:18:28.712969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.732158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.732198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:59.996 [2024-11-05 18:18:28.732211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.177 ms 00:28:59.996 [2024-11-05 18:18:28.732222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.749756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.749792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:59.996 [2024-11-05 18:18:28.749804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.512 ms 00:28:59.996 [2024-11-05 18:18:28.749829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.766567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.766700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:59.996 [2024-11-05 18:18:28.766720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.725 ms 00:28:59.996 [2024-11-05 18:18:28.766746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.767509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.767534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:59.996 [2024-11-05 
18:18:28.767546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.640 ms 00:28:59.996 [2024-11-05 18:18:28.767556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.875005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.875249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:59.996 [2024-11-05 18:18:28.875290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 107.599 ms 00:28:59.996 [2024-11-05 18:18:28.875301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.885645] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:59.996 [2024-11-05 18:18:28.886458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.886487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:59.996 [2024-11-05 18:18:28.886500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.100 ms 00:28:59.996 [2024-11-05 18:18:28.886511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.886592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.886610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:59.996 [2024-11-05 18:18:28.886621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:59.996 [2024-11-05 18:18:28.886631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.886692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.886705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:59.996 [2024-11-05 18:18:28.886716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:59.996 [2024-11-05 18:18:28.886725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.886748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.886759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:59.996 [2024-11-05 18:18:28.886770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:59.996 [2024-11-05 18:18:28.886783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.886818] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:59.996 [2024-11-05 18:18:28.886830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.886840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:59.996 [2024-11-05 18:18:28.886850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:59.996 [2024-11-05 18:18:28.886861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.920239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.996 [2024-11-05 18:18:28.920401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:59.996 [2024-11-05 18:18:28.920438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.410 ms 00:28:59.996 [2024-11-05 18:18:28.920450] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.996 [2024-11-05 18:18:28.920590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:59.997 [2024-11-05 18:18:28.920604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:59.997 [2024-11-05 18:18:28.920615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:59.997 [2024-11-05 18:18:28.920625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:59.997 [2024-11-05 18:18:28.921712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4068.598 ms, result 0 00:28:59.997 [2024-11-05 18:18:28.936769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.997 [2024-11-05 18:18:28.952743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:59.997 [2024-11-05 18:18:28.961471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:00.256 18:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:00.256 18:18:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:29:00.256 18:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:00.256 18:18:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:00.256 18:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:00.523 [2024-11-05 18:18:29.596773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.523 [2024-11-05 18:18:29.596812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:00.523 [2024-11-05 18:18:29.596826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:00.523 [2024-11-05 18:18:29.596855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.523 [2024-11-05 18:18:29.596878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.523 [2024-11-05 18:18:29.596889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:00.523 [2024-11-05 18:18:29.596899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.523 [2024-11-05 18:18:29.596908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.523 [2024-11-05 18:18:29.596927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:00.523 [2024-11-05 18:18:29.596938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:00.523 [2024-11-05 18:18:29.596949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:00.523 [2024-11-05 18:18:29.596958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:00.523 [2024-11-05 18:18:29.597010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.224 ms, result 0 00:29:00.523 true 00:29:00.524 18:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.524 { 00:29:00.524 "name": "ftl", 00:29:00.524 "properties": [ 00:29:00.524 { 00:29:00.524 "name": "superblock_version", 00:29:00.524 "value": 5, 00:29:00.524 "read-only": true 00:29:00.524 }, 
00:29:00.524 { 00:29:00.524 "name": "base_device", 00:29:00.524 "bands": [ 00:29:00.524 { 00:29:00.524 "id": 0, 00:29:00.524 "state": "CLOSED", 00:29:00.524 "validity": 1.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 1, 00:29:00.524 "state": "CLOSED", 00:29:00.524 "validity": 1.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 2, 00:29:00.524 "state": "CLOSED", 00:29:00.524 "validity": 0.007843137254901933 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 3, 00:29:00.524 "state": "FREE", 00:29:00.524 "validity": 0.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 4, 00:29:00.524 "state": "FREE", 00:29:00.524 "validity": 0.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 5, 00:29:00.524 "state": "FREE", 00:29:00.524 "validity": 0.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 6, 00:29:00.524 "state": "FREE", 00:29:00.524 "validity": 0.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 7, 00:29:00.524 "state": "FREE", 00:29:00.524 "validity": 0.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 8, 00:29:00.524 "state": "FREE", 00:29:00.524 "validity": 0.0 00:29:00.524 }, 00:29:00.524 { 00:29:00.524 "id": 9, 00:29:00.524 "state": "FREE", 00:29:00.524 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 10, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 11, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 12, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 13, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 14, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 15, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 16, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 17, 00:29:00.525 "state": "FREE", 00:29:00.525 "validity": 0.0 00:29:00.525 } 00:29:00.525 ], 00:29:00.525 "read-only": true 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "name": "cache_device", 00:29:00.525 "type": "bdev", 00:29:00.525 "chunks": [ 00:29:00.525 { 00:29:00.525 "id": 0, 00:29:00.525 "state": "INACTIVE", 00:29:00.525 "utilization": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 1, 00:29:00.525 "state": "OPEN", 00:29:00.525 "utilization": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.525 "id": 2, 00:29:00.525 "state": "OPEN", 00:29:00.525 "utilization": 0.0 00:29:00.525 }, 00:29:00.525 { 00:29:00.526 "id": 3, 00:29:00.526 "state": "FREE", 00:29:00.526 "utilization": 0.0 00:29:00.526 }, 00:29:00.526 { 00:29:00.526 "id": 4, 00:29:00.526 "state": "FREE", 00:29:00.526 "utilization": 0.0 00:29:00.526 } 00:29:00.526 ], 00:29:00.526 "read-only": true 00:29:00.526 }, 00:29:00.526 { 00:29:00.526 "name": "verbose_mode", 00:29:00.526 "value": true, 00:29:00.526 "unit": "", 00:29:00.526 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:00.526 }, 00:29:00.526 { 00:29:00.526 "name": "prep_upgrade_on_shutdown", 00:29:00.526 "value": false, 00:29:00.526 "unit": "", 00:29:00.526 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:00.526 } 00:29:00.526 ] 00:29:00.526 } 00:29:00.526 18:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:00.526 18:18:29 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:00.526 18:18:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:00.787 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:00.787 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:00.787 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:00.787 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:00.787 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:01.046 Validate MD5 checksum, iteration 1 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:01.046 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:01.047 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:01.047 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:01.047 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:01.047 18:18:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:01.047 [2024-11-05 18:18:30.316673] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
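Here tcp_dd expands into a plain spdk_dd run that pins core 1 and loads the initiator config (ini.json), so the read of ftln1 travels over the NVMe/TCP connection to the target brought up above. The checksum pass walks the device in two 1 GiB strides; a condensed paraphrase of the loop being traced, reconstructed from the upgrade_shutdown.sh@96-@105 markers here and in the records that follow (the sums array holding the checksums recorded before shutdown is an assumption about earlier test state, not visible in this excerpt):

# Hedged paraphrase of test_validate_checksum, not the verbatim script.
skip=0
for (( i = 0; i < iterations; i++ )); do
    echo "Validate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$(( skip + 1024 ))
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    [[ $sum == "${sums[i]}" ]] || return 1   # read-back must match the pre-shutdown checksum
done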
00:29:01.047 [2024-11-05 18:18:30.317056] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81258 ] 00:29:01.306 [2024-11-05 18:18:30.501421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.306 [2024-11-05 18:18:30.608247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.211  [2024-11-05T18:18:33.103Z] Copying: 663/1024 [MB] (663 MBps) [2024-11-05T18:18:34.480Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:29:05.157 00:29:05.157 18:18:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:05.157 18:18:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ed24e0f49df1d4d5b71d954cd18c8164 00:29:07.063 Validate MD5 checksum, iteration 2 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ed24e0f49df1d4d5b71d954cd18c8164 != \e\d\2\4\e\0\f\4\9\d\f\1\d\4\d\5\b\7\1\d\9\5\4\c\d\1\8\c\8\1\6\4 ]] 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:07.063 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:07.063 [2024-11-05 18:18:36.137841] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 
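Iteration 1 passes: the computed digest ed24e0f49df1d4d5b71d954cd18c8164 equals the expected one, so the != test is false and the loop advances to iteration 2. The backslash-riddled right-hand side of that [[ ... != \e\d\2\4... ]] trace is not log corruption; bash xtrace prints a quoted string on the pattern side of != with every character escaped, to show it is being matched literally rather than as a glob. A quick standalone demonstration (plain bash, nothing SPDK-specific assumed):

set -x
sum=ed24e0f49df1d4d5b71d954cd18c8164
[[ $sum != "$sum" ]]   # traces as: [[ ed24... != \e\d\2\4\e\0... ]] and evaluates false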
00:29:07.063 [2024-11-05 18:18:36.138179] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81325 ] 00:29:07.063 [2024-11-05 18:18:36.321311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.322 [2024-11-05 18:18:36.429938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.227  [2024-11-05T18:18:38.809Z] Copying: 619/1024 [MB] (619 MBps) [2024-11-05T18:18:41.345Z] Copying: 1024/1024 [MB] (average 622 MBps) 00:29:12.022 00:29:12.022 18:18:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:12.022 18:18:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fed90cc23c5b095bd8e2b8118e88dd81 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fed90cc23c5b095bd8e2b8118e88dd81 != \f\e\d\9\0\c\c\2\3\c\5\b\0\9\5\b\d\8\e\2\b\8\1\1\8\e\8\8\d\d\8\1 ]] 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81169 ]] 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81169 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81392 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81392 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81392 ']' 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:13.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
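With both checksums recorded, the script now forces the failure case under test: tcp_target_shutdown_dirty SIGKILLs the target (kill -9 81169), so FTL never gets a chance to persist a clean state, and tcp_target_setup immediately respawns spdk_tgt (pid 81392) from the saved tgt.json. A sketch of that crash-and-restart step, assuming waitforlisten is the common.sh helper visible in the xtrace:

    # simulate a crash: no graceful FTL shutdown, the clean-state flag stays unset
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # restart from the config captured before the crash; FTL detects the dirty
    # state during startup and runs recovery (the log that follows)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"

Everything from here to the "Management process finished, name 'FTL startup'" message is that recovery: superblock load and validation, P2L checkpoint restore, and replay of the two NV-cache chunks (seq ids 14 and 15) that were open when the process died.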
00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:13.928 18:18:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:13.928 [2024-11-05 18:18:42.865301] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:29:13.928 [2024-11-05 18:18:42.865429] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81392 ] 00:29:13.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81169 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:13.928 [2024-11-05 18:18:43.043667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.928 [2024-11-05 18:18:43.174763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.308 [2024-11-05 18:18:44.245758] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:15.308 [2024-11-05 18:18:44.245931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:15.308 [2024-11-05 18:18:44.393074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.308 [2024-11-05 18:18:44.393119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:15.308 [2024-11-05 18:18:44.393135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:15.308 [2024-11-05 18:18:44.393146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.308 [2024-11-05 18:18:44.393201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.308 [2024-11-05 18:18:44.393213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:15.308 [2024-11-05 18:18:44.393224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:15.308 [2024-11-05 18:18:44.393233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.308 [2024-11-05 18:18:44.393262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:15.308 [2024-11-05 18:18:44.394240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:15.308 [2024-11-05 18:18:44.394268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.308 [2024-11-05 18:18:44.394281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:15.308 [2024-11-05 18:18:44.394292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:29:15.308 [2024-11-05 18:18:44.394302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.308 [2024-11-05 18:18:44.394768] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:15.308 [2024-11-05 18:18:44.420629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.308 [2024-11-05 18:18:44.420668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:15.308 [2024-11-05 18:18:44.420683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.903 ms 00:29:15.308 [2024-11-05 18:18:44.420694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.308 [2024-11-05 18:18:44.434535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:15.308 [2024-11-05 18:18:44.434582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:15.308 [2024-11-05 18:18:44.434601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:15.308 [2024-11-05 18:18:44.434611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.308 [2024-11-05 18:18:44.435256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.308 [2024-11-05 18:18:44.435277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:15.308 [2024-11-05 18:18:44.435289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.565 ms 00:29:15.308 [2024-11-05 18:18:44.435299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.308 [2024-11-05 18:18:44.435363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.308 [2024-11-05 18:18:44.435380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:15.309 [2024-11-05 18:18:44.435390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:15.309 [2024-11-05 18:18:44.435400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.309 [2024-11-05 18:18:44.435447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.309 [2024-11-05 18:18:44.435458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:15.309 [2024-11-05 18:18:44.435469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:15.309 [2024-11-05 18:18:44.435478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.309 [2024-11-05 18:18:44.435504] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:15.309 [2024-11-05 18:18:44.439395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.309 [2024-11-05 18:18:44.439432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:15.309 [2024-11-05 18:18:44.439444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.903 ms 00:29:15.309 [2024-11-05 18:18:44.439454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.309 [2024-11-05 18:18:44.439485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.309 [2024-11-05 18:18:44.439495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:15.309 [2024-11-05 18:18:44.439506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:15.309 [2024-11-05 18:18:44.439516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.309 [2024-11-05 18:18:44.439551] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:15.309 [2024-11-05 18:18:44.439596] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:15.309 [2024-11-05 18:18:44.439631] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:15.309 [2024-11-05 18:18:44.439653] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:15.309 [2024-11-05 18:18:44.439741] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:15.309 [2024-11-05 18:18:44.439755] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:15.309 [2024-11-05 18:18:44.439769] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:15.309 [2024-11-05 18:18:44.439781] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:15.309 [2024-11-05 18:18:44.439793] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:15.309 [2024-11-05 18:18:44.439804] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:15.309 [2024-11-05 18:18:44.439814] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:15.309 [2024-11-05 18:18:44.439823] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:15.309 [2024-11-05 18:18:44.439833] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:15.309 [2024-11-05 18:18:44.439843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.309 [2024-11-05 18:18:44.439857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:15.309 [2024-11-05 18:18:44.439867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.295 ms 00:29:15.309 [2024-11-05 18:18:44.439876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.309 [2024-11-05 18:18:44.439943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.309 [2024-11-05 18:18:44.439954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:15.309 [2024-11-05 18:18:44.439964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:15.309 [2024-11-05 18:18:44.439975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.309 [2024-11-05 18:18:44.440057] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:15.309 [2024-11-05 18:18:44.440070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:15.309 [2024-11-05 18:18:44.440084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:15.309 [2024-11-05 18:18:44.440114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:15.309 [2024-11-05 18:18:44.440133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:15.309 [2024-11-05 18:18:44.440143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:15.309 [2024-11-05 18:18:44.440152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:15.309 [2024-11-05 18:18:44.440173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:15.309 [2024-11-05 18:18:44.440182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:15.309 [2024-11-05 18:18:44.440201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:29:15.309 [2024-11-05 18:18:44.440210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:15.309 [2024-11-05 18:18:44.440228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:15.309 [2024-11-05 18:18:44.440237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:15.309 [2024-11-05 18:18:44.440256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:15.309 [2024-11-05 18:18:44.440265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:15.309 [2024-11-05 18:18:44.440295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:15.309 [2024-11-05 18:18:44.440304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:15.309 [2024-11-05 18:18:44.440322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:15.309 [2024-11-05 18:18:44.440331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:15.309 [2024-11-05 18:18:44.440348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:15.309 [2024-11-05 18:18:44.440357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:15.309 [2024-11-05 18:18:44.440376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:15.309 [2024-11-05 18:18:44.440386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:15.309 [2024-11-05 18:18:44.440404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:15.309 [2024-11-05 18:18:44.440449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:15.309 [2024-11-05 18:18:44.440476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:15.309 [2024-11-05 18:18:44.440487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440497] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:15.309 [2024-11-05 18:18:44.440507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:15.309 [2024-11-05 18:18:44.440517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:29:15.309 [2024-11-05 18:18:44.440537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:15.309 [2024-11-05 18:18:44.440546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:15.309 [2024-11-05 18:18:44.440555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:15.309 [2024-11-05 18:18:44.440564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:15.309 [2024-11-05 18:18:44.440572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:15.309 [2024-11-05 18:18:44.440582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:15.309 [2024-11-05 18:18:44.440592] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:15.309 [2024-11-05 18:18:44.440605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.309 [2024-11-05 18:18:44.440616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:15.309 [2024-11-05 18:18:44.440627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:15.309 [2024-11-05 18:18:44.440637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:15.309 [2024-11-05 18:18:44.440648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:15.309 [2024-11-05 18:18:44.440658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:15.309 [2024-11-05 18:18:44.440669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:15.309 [2024-11-05 18:18:44.440679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:15.309 [2024-11-05 18:18:44.440690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:15.309 [2024-11-05 18:18:44.440700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:15.309 [2024-11-05 18:18:44.440710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:15.309 [2024-11-05 18:18:44.440719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:15.310 [2024-11-05 18:18:44.440728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:15.310 [2024-11-05 18:18:44.440738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:15.310 [2024-11-05 18:18:44.440749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:15.310 [2024-11-05 18:18:44.440759] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:29:15.310 [2024-11-05 18:18:44.440770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:15.310 [2024-11-05 18:18:44.440781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:15.310 [2024-11-05 18:18:44.440790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:15.310 [2024-11-05 18:18:44.440800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:15.310 [2024-11-05 18:18:44.440826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:15.310 [2024-11-05 18:18:44.440837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.440854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:15.310 [2024-11-05 18:18:44.440864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.832 ms 00:29:15.310 [2024-11-05 18:18:44.440874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.483564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.483600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:15.310 [2024-11-05 18:18:44.483614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.707 ms 00:29:15.310 [2024-11-05 18:18:44.483625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.483663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.483674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:15.310 [2024-11-05 18:18:44.483685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:15.310 [2024-11-05 18:18:44.483696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.535115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.535169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:15.310 [2024-11-05 18:18:44.535183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.444 ms 00:29:15.310 [2024-11-05 18:18:44.535194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.535232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.535243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:15.310 [2024-11-05 18:18:44.535255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:15.310 [2024-11-05 18:18:44.535265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.535406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.535442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:15.310 [2024-11-05 18:18:44.535454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:29:15.310 [2024-11-05 18:18:44.535465] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.535511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.535523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:15.310 [2024-11-05 18:18:44.535534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:15.310 [2024-11-05 18:18:44.535544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.560613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.560647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:15.310 [2024-11-05 18:18:44.560660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.082 ms 00:29:15.310 [2024-11-05 18:18:44.560672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.560793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.560809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:15.310 [2024-11-05 18:18:44.560820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:15.310 [2024-11-05 18:18:44.560830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.310 [2024-11-05 18:18:44.617117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.310 [2024-11-05 18:18:44.617167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:15.310 [2024-11-05 18:18:44.617186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 56.356 ms 00:29:15.310 [2024-11-05 18:18:44.617202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.574 [2024-11-05 18:18:44.631815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.574 [2024-11-05 18:18:44.631870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:15.574 [2024-11-05 18:18:44.631894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.659 ms 00:29:15.574 [2024-11-05 18:18:44.631906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.574 [2024-11-05 18:18:44.722492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.574 [2024-11-05 18:18:44.722547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:15.574 [2024-11-05 18:18:44.722570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.669 ms 00:29:15.574 [2024-11-05 18:18:44.722582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.574 [2024-11-05 18:18:44.722796] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:15.574 [2024-11-05 18:18:44.722971] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:15.574 [2024-11-05 18:18:44.723136] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:15.574 [2024-11-05 18:18:44.723291] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:15.574 [2024-11-05 18:18:44.723305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.574 [2024-11-05 18:18:44.723317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:15.574 [2024-11-05 
18:18:44.723330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.677 ms 00:29:15.574 [2024-11-05 18:18:44.723340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.574 [2024-11-05 18:18:44.723402] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:15.574 [2024-11-05 18:18:44.723441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.575 [2024-11-05 18:18:44.723458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:15.575 [2024-11-05 18:18:44.723470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:29:15.575 [2024-11-05 18:18:44.723480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.575 [2024-11-05 18:18:44.744079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.575 [2024-11-05 18:18:44.744142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:15.575 [2024-11-05 18:18:44.744158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.606 ms 00:29:15.575 [2024-11-05 18:18:44.744168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.575 [2024-11-05 18:18:44.757183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.575 [2024-11-05 18:18:44.757220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:15.575 [2024-11-05 18:18:44.757234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:29:15.575 [2024-11-05 18:18:44.757245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:15.575 [2024-11-05 18:18:44.757359] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:15.575 [2024-11-05 18:18:44.757732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:15.575 [2024-11-05 18:18:44.757751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:15.575 [2024-11-05 18:18:44.757764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:29:15.575 [2024-11-05 18:18:44.757775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.169 [2024-11-05 18:18:45.340897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.169 [2024-11-05 18:18:45.340958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:16.169 [2024-11-05 18:18:45.340977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 582.874 ms 00:29:16.169 [2024-11-05 18:18:45.340988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.169 [2024-11-05 18:18:45.347231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.169 [2024-11-05 18:18:45.347275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:16.169 [2024-11-05 18:18:45.347289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.445 ms 00:29:16.169 [2024-11-05 18:18:45.347301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.169 [2024-11-05 18:18:45.347890] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:16.169 [2024-11-05 18:18:45.347922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.169 [2024-11-05 18:18:45.347934] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:16.169 [2024-11-05 18:18:45.347947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.583 ms 00:29:16.169 [2024-11-05 18:18:45.347959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.169 [2024-11-05 18:18:45.347992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.169 [2024-11-05 18:18:45.348005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:16.169 [2024-11-05 18:18:45.348016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:16.169 [2024-11-05 18:18:45.348027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.169 [2024-11-05 18:18:45.348070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 591.673 ms, result 0 00:29:16.169 [2024-11-05 18:18:45.348118] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:16.169 [2024-11-05 18:18:45.348257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.169 [2024-11-05 18:18:45.348269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:16.169 [2024-11-05 18:18:45.348279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.141 ms 00:29:16.169 [2024-11-05 18:18:45.348289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.933000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.933056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:16.739 [2024-11-05 18:18:45.933073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 584.439 ms 00:29:16.739 [2024-11-05 18:18:45.933086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.939135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.939177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:16.739 [2024-11-05 18:18:45.939190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.566 ms 00:29:16.739 [2024-11-05 18:18:45.939201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.939750] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:16.739 [2024-11-05 18:18:45.939776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.939787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:16.739 [2024-11-05 18:18:45.939799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:29:16.739 [2024-11-05 18:18:45.939810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.939843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.939854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:16.739 [2024-11-05 18:18:45.939865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:16.739 [2024-11-05 18:18:45.939875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 
18:18:45.939923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 592.754 ms, result 0 00:29:16.739 [2024-11-05 18:18:45.939986] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:16.739 [2024-11-05 18:18:45.940000] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:16.739 [2024-11-05 18:18:45.940013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.940025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:16.739 [2024-11-05 18:18:45.940036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1184.598 ms 00:29:16.739 [2024-11-05 18:18:45.940046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.940078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.940090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:16.739 [2024-11-05 18:18:45.940106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:16.739 [2024-11-05 18:18:45.940117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.951925] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:16.739 [2024-11-05 18:18:45.952067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.952080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:16.739 [2024-11-05 18:18:45.952092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.953 ms 00:29:16.739 [2024-11-05 18:18:45.952103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.952698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.952715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:16.739 [2024-11-05 18:18:45.952730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:29:16.739 [2024-11-05 18:18:45.952741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.954726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.954896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:16.739 [2024-11-05 18:18:45.954918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.970 ms 00:29:16.739 [2024-11-05 18:18:45.954928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.954985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.954999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:16.739 [2024-11-05 18:18:45.955010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:16.739 [2024-11-05 18:18:45.955025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.955132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.955144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:16.739 
[2024-11-05 18:18:45.955155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:16.739 [2024-11-05 18:18:45.955165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.955189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.955199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:16.739 [2024-11-05 18:18:45.955211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:16.739 [2024-11-05 18:18:45.955221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.955258] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:16.739 [2024-11-05 18:18:45.955274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.955284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:16.739 [2024-11-05 18:18:45.955294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:16.739 [2024-11-05 18:18:45.955303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.955359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:16.739 [2024-11-05 18:18:45.955371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:16.739 [2024-11-05 18:18:45.955381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:16.739 [2024-11-05 18:18:45.955392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:16.739 [2024-11-05 18:18:45.956739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1565.639 ms, result 0 00:29:16.739 [2024-11-05 18:18:45.972374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.739 [2024-11-05 18:18:45.988342] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:16.739 [2024-11-05 18:18:45.998953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:16.739 Validate MD5 checksum, iteration 1 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:16.739 18:18:46 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:16.739 18:18:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:16.998 [2024-11-05 18:18:46.136353] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization... 00:29:16.998 [2024-11-05 18:18:46.136495] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81438 ] 00:29:16.999 [2024-11-05 18:18:46.315209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.257 [2024-11-05 18:18:46.421294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.161  [2024-11-05T18:18:48.744Z] Copying: 620/1024 [MB] (620 MBps) [2024-11-05T18:18:52.033Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:29:22.710 00:29:22.710 18:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:22.710 18:18:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:24.087 Validate MD5 checksum, iteration 2 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ed24e0f49df1d4d5b71d954cd18c8164 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ed24e0f49df1d4d5b71d954cd18c8164 != \e\d\2\4\e\0\f\4\9\d\f\1\d\4\d\5\b\7\1\d\9\5\4\c\d\1\8\c\8\1\6\4 ]] 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:24.087 18:18:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:24.087 [2024-11-05 18:18:53.065932] Starting SPDK v25.01-pre git sha1 
8053cd6b8 / DPDK 24.03.0 initialization... 00:29:24.087 [2024-11-05 18:18:53.066808] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81510 ] 00:29:24.087 [2024-11-05 18:18:53.244573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.087 [2024-11-05 18:18:53.351749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.993  [2024-11-05T18:18:55.884Z] Copying: 616/1024 [MB] (616 MBps) [2024-11-05T18:18:57.263Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:29:27.940 00:29:27.940 18:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:27.940 18:18:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fed90cc23c5b095bd8e2b8118e88dd81 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fed90cc23c5b095bd8e2b8118e88dd81 != \f\e\d\9\0\c\c\2\3\c\5\b\0\9\5\b\d\8\e\2\b\8\1\1\8\e\8\8\d\d\8\1 ]] 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81392 ]] 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81392 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81392 ']' 00:29:29.319 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81392 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81392 00:29:29.579 killing process with pid 81392 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81392' 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@971 -- # kill 81392 00:29:29.579 18:18:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81392 00:29:30.516 [2024-11-05 18:18:59.840394] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:30.775 [2024-11-05 18:18:59.858939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.775 [2024-11-05 18:18:59.858983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:30.775 [2024-11-05 18:18:59.859000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:30.775 [2024-11-05 18:18:59.859011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.775 [2024-11-05 18:18:59.859036] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:30.775 [2024-11-05 18:18:59.863421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.775 [2024-11-05 18:18:59.863451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:30.775 [2024-11-05 18:18:59.863463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.376 ms 00:29:30.775 [2024-11-05 18:18:59.863478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.775 [2024-11-05 18:18:59.863697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.775 [2024-11-05 18:18:59.863711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:30.775 [2024-11-05 18:18:59.863722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.191 ms 00:29:30.775 [2024-11-05 18:18:59.863732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.775 [2024-11-05 18:18:59.865042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.775 [2024-11-05 18:18:59.865073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:30.775 [2024-11-05 18:18:59.865086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:29:30.775 [2024-11-05 18:18:59.865097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.775 [2024-11-05 18:18:59.866062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.866255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:30.776 [2024-11-05 18:18:59.866276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.907 ms 00:29:30.776 [2024-11-05 18:18:59.866288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.880258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.880294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:30.776 [2024-11-05 18:18:59.880310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.944 ms 00:29:30.776 [2024-11-05 18:18:59.880326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.888255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.888415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:30.776 [2024-11-05 18:18:59.888446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.904 ms 00:29:30.776 [2024-11-05 18:18:59.888457] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.888541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.888554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:30.776 [2024-11-05 18:18:59.888565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:30.776 [2024-11-05 18:18:59.888577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.902535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.902697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:30.776 [2024-11-05 18:18:59.902718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.957 ms 00:29:30.776 [2024-11-05 18:18:59.902729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.916433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.916465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:30.776 [2024-11-05 18:18:59.916477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.645 ms 00:29:30.776 [2024-11-05 18:18:59.916487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.929895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.930039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:30.776 [2024-11-05 18:18:59.930075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.395 ms 00:29:30.776 [2024-11-05 18:18:59.930086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.944028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.944065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:30.776 [2024-11-05 18:18:59.944078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.842 ms 00:29:30.776 [2024-11-05 18:18:59.944089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.944126] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:30.776 [2024-11-05 18:18:59.944144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:30.776 [2024-11-05 18:18:59.944156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:30.776 [2024-11-05 18:18:59.944168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:30.776 [2024-11-05 18:18:59.944178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 
[2024-11-05 18:18:59.944233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:30.776 [2024-11-05 18:18:59.944338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:30.776 [2024-11-05 18:18:59.944349] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d904b2d8-2e4f-4554-a6b3-d28ff20c020c 00:29:30.776 [2024-11-05 18:18:59.944361] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:30.776 [2024-11-05 18:18:59.944371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:30.776 [2024-11-05 18:18:59.944381] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:30.776 [2024-11-05 18:18:59.944391] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:30.776 [2024-11-05 18:18:59.944401] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:30.776 [2024-11-05 18:18:59.944420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:30.776 [2024-11-05 18:18:59.944431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:30.776 [2024-11-05 18:18:59.944439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:30.776 [2024-11-05 18:18:59.944448] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:30.776 [2024-11-05 18:18:59.944460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.944477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:30.776 [2024-11-05 18:18:59.944488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.336 ms 00:29:30.776 [2024-11-05 18:18:59.944499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:30.776 [2024-11-05 18:18:59.965011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:30.776 [2024-11-05 18:18:59.965151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:30.776 [2024-11-05 18:18:59.965228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.514 ms 00:29:30.776 [2024-11-05 18:18:59.965265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
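This dump is the graceful counterpart of the earlier kill -9: killprocess terminated pid 81392 cleanly, so FTL persisted its metadata, set the clean-state flag ("Set FTL clean state"), and reported per-band validity before tearing down (three closed bands carrying data, the rest free, WAF "inf" because no user writes happened in this phase). When auditing a run like this one, a small hypothetical one-liner can pair every management step with its duration to show where startup and shutdown time went; it assumes the log has been unwrapped to one record per line and saved as ftl.log:

    awk '/\[FTL\]\[ftl\] name: /     { sub(/.*name: /, "");     step = $0 }
         /\[FTL\]\[ftl\] duration: / { sub(/.*duration: /, ""); print $0 "\t" step }' ftl.log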
00:29:30.776 [2024-11-05 18:18:59.965930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:29:30.776 [2024-11-05 18:18:59.966046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:29:30.776 [2024-11-05 18:18:59.966122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.619 ms
00:29:30.776 [2024-11-05 18:18:59.966159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:30.776 [2024-11-05 18:19:00.035337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:30.776 [2024-11-05 18:19:00.035501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:29:30.776 [2024-11-05 18:19:00.035632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:30.776 [2024-11-05 18:19:00.035671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:30.776 [2024-11-05 18:19:00.035740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:30.776 [2024-11-05 18:19:00.035774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:29:30.776 [2024-11-05 18:19:00.035862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:30.776 [2024-11-05 18:19:00.035898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:30.776 [2024-11-05 18:19:00.036019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:30.776 [2024-11-05 18:19:00.036062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:29:30.776 [2024-11-05 18:19:00.036169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:30.776 [2024-11-05 18:19:00.036206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:30.776 [2024-11-05 18:19:00.036252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:30.776 [2024-11-05 18:19:00.036292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:29:30.776 [2024-11-05 18:19:00.036323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:30.776 [2024-11-05 18:19:00.036443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.035 [2024-11-05 18:19:00.165983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.035 [2024-11-05 18:19:00.166172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:29:31.035 [2024-11-05 18:19:00.166302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.035 [2024-11-05 18:19:00.166340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.035 [2024-11-05 18:19:00.265602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.035 [2024-11-05 18:19:00.265660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:29:31.035 [2024-11-05 18:19:00.265675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.035 [2024-11-05 18:19:00.265687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.035 [2024-11-05 18:19:00.265861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.035 [2024-11-05 18:19:00.265877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:29:31.035 [2024-11-05 18:19:00.265888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.035 [2024-11-05 18:19:00.265899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.035 [2024-11-05 18:19:00.265950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.035 [2024-11-05 18:19:00.265962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:29:31.035 [2024-11-05 18:19:00.265978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.035 [2024-11-05 18:19:00.266001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.035 [2024-11-05 18:19:00.266135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.035 [2024-11-05 18:19:00.266151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:29:31.035 [2024-11-05 18:19:00.266161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.035 [2024-11-05 18:19:00.266171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.035 [2024-11-05 18:19:00.266210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.035 [2024-11-05 18:19:00.266224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:29:31.035 [2024-11-05 18:19:00.266234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.035 [2024-11-05 18:19:00.266249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.035 [2024-11-05 18:19:00.266297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.036 [2024-11-05 18:19:00.266309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:29:31.036 [2024-11-05 18:19:00.266320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.036 [2024-11-05 18:19:00.266331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.036 [2024-11-05 18:19:00.266400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:29:31.036 [2024-11-05 18:19:00.266438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:29:31.036 [2024-11-05 18:19:00.266453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:29:31.036 [2024-11-05 18:19:00.266464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:29:31.036 [2024-11-05 18:19:00.266612] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 408.293 ms, result 0
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:32.453 Remove shared memory files
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81169
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:29:32.453 ************************************
00:29:32.453 END TEST ftl_upgrade_shutdown
00:29:32.453 ************************************
00:29:32.453
00:29:32.453 real 1m27.175s
00:29:32.453 user 1m57.041s
00:29:32.453 sys 0m24.329s
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:32.453 18:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:32.453 18:19:01 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:29:32.453 18:19:01 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:29:32.453 18:19:01 ftl -- ftl/ftl.sh@14 -- # killprocess 73687
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@952 -- # '[' -z 73687 ']'
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@956 -- # kill -0 73687
00:29:32.453 Process with pid 73687 is not found
00:29:32.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (73687) - No such process
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 73687 is not found'
00:29:32.453 18:19:01 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:29:32.453 18:19:01 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81637
00:29:32.453 18:19:01 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:32.453 18:19:01 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81637
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@833 -- # '[' -z 81637 ']'
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:32.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:32.453 18:19:01 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:32.453 [2024-11-05 18:19:01.775370] Starting SPDK v25.01-pre git sha1 8053cd6b8 / DPDK 24.03.0 initialization...
00:29:32.453 [2024-11-05 18:19:01.775508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81637 ]
00:29:32.712 [2024-11-05 18:19:01.954480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:32.971 [2024-11-05 18:19:02.086892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:33.910 18:19:03 ftl -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:33.910 18:19:03 ftl -- common/autotest_common.sh@866 -- # return 0
00:29:33.910 18:19:03 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:29:34.178 nvme0n1
00:29:34.178 18:19:03 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:29:34.178 18:19:03 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:29:34.178 18:19:03 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:29:34.440 18:19:03 ftl -- ftl/common.sh@28 -- # stores=1960a416-1ce4-45b5-9b5e-c6cd51afb6f6
00:29:34.440 18:19:03 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:29:34.440 18:19:03 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1960a416-1ce4-45b5-9b5e-c6cd51afb6f6
00:29:34.440 18:19:03 ftl -- ftl/ftl.sh@23 -- # killprocess 81637
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@952 -- # '[' -z 81637 ']'
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@956 -- # kill -0 81637
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@957 -- # uname
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81637
00:29:34.440 killing process with pid 81637
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81637'
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@971 -- # kill 81637
00:29:34.440 18:19:03 ftl -- common/autotest_common.sh@976 -- # wait 81637
00:29:36.976 18:19:06 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:29:37.236 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:37.495 Waiting for block devices as requested
00:29:37.495 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:29:37.495 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:29:37.754 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:29:37.754 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:29:43.028 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:29:43.028 18:19:12 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:29:43.028 18:19:12 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:43.028 Remove shared memory files
00:29:43.028 18:19:12 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:29:43.028 18:19:12 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:29:43.028 18:19:12 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:29:43.028 18:19:12 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:43.028 18:19:12 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:29:43.028 ************************************
00:29:43.028 END TEST ftl
00:29:43.028 ************************************
00:29:43.028
00:29:43.028 real 11m38.996s
00:29:43.028 user 14m7.115s
00:29:43.028 sys 1m31.919s
00:29:43.028 18:19:12 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:43.028 18:19:12 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:43.028 18:19:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:29:43.028 18:19:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:29:43.028 18:19:12 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:29:43.028 18:19:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:29:43.028 18:19:12 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:29:43.028 18:19:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:29:43.028 18:19:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:29:43.028 18:19:12 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:29:43.028 18:19:12 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:29:43.028 18:19:12 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:29:43.028 18:19:12 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:43.028 18:19:12 -- common/autotest_common.sh@10 -- # set +x
00:29:43.028 18:19:12 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:29:43.028 18:19:12 -- common/autotest_common.sh@1394 -- # local autotest_es=0
00:29:43.028 18:19:12 -- common/autotest_common.sh@1395 -- # xtrace_disable
00:29:43.028 18:19:12 -- common/autotest_common.sh@10 -- # set +x
00:29:45.563 INFO: APP EXITING
00:29:45.563 INFO: killing all VMs
00:29:45.563 INFO: killing vhost app
00:29:45.563 INFO: EXIT DONE
00:29:46.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:46.391 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:29:46.391 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:29:46.651 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:29:46.651 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:29:47.219 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:29:47.478 Cleaning
00:29:47.478 Removing: /var/run/dpdk/spdk0/config
00:29:47.478 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:29:47.478 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:29:47.478 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:29:47.478 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:29:47.478 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:29:47.478 Removing: /var/run/dpdk/spdk0/hugepage_info
00:29:47.478 Removing: /var/run/dpdk/spdk0
00:29:47.478 Removing: /var/run/dpdk/spdk_pid57454
00:29:47.478 Removing: /var/run/dpdk/spdk_pid57691
00:29:47.478 Removing: /var/run/dpdk/spdk_pid57923
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58033
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58078
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58217
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58235
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58445
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58551
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58658
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58780
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58888
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58928
00:29:47.478 Removing: /var/run/dpdk/spdk_pid58964
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59040
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59141
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59588
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59663
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59732
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59753
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59906
00:29:47.738 Removing: /var/run/dpdk/spdk_pid59926
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60076
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60092
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60156
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60180
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60244
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60267
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60462
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60499
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60588
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60777
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60880
00:29:47.738 Removing: /var/run/dpdk/spdk_pid60922
00:29:47.738 Removing: /var/run/dpdk/spdk_pid61372
00:29:47.738 Removing: /var/run/dpdk/spdk_pid61471
00:29:47.738 Removing: /var/run/dpdk/spdk_pid61586
00:29:47.738 Removing: /var/run/dpdk/spdk_pid61640
00:29:47.738 Removing: /var/run/dpdk/spdk_pid61664
00:29:47.738 Removing: /var/run/dpdk/spdk_pid61748
00:29:47.738 Removing: /var/run/dpdk/spdk_pid62392
00:29:47.738 Removing: /var/run/dpdk/spdk_pid62440
00:29:47.738 Removing: /var/run/dpdk/spdk_pid62934
00:29:47.738 Removing: /var/run/dpdk/spdk_pid63032
00:29:47.738 Removing: /var/run/dpdk/spdk_pid63152
00:29:47.738 Removing: /var/run/dpdk/spdk_pid63205
00:29:47.738 Removing: /var/run/dpdk/spdk_pid63232
00:29:47.738 Removing: /var/run/dpdk/spdk_pid63257
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65153
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65296
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65305
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65317
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65362
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65366
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65378
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65429
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65433
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65445
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65490
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65494
00:29:47.738 Removing: /var/run/dpdk/spdk_pid65506
00:29:47.738 Removing: /var/run/dpdk/spdk_pid66911
00:29:47.738 Removing: /var/run/dpdk/spdk_pid67021
00:29:47.738 Removing: /var/run/dpdk/spdk_pid68454
00:29:47.738 Removing: /var/run/dpdk/spdk_pid69825
00:29:47.738 Removing: /var/run/dpdk/spdk_pid69934
00:29:47.738 Removing: /var/run/dpdk/spdk_pid70038
00:29:47.998 Removing: /var/run/dpdk/spdk_pid70142
00:29:47.998 Removing: /var/run/dpdk/spdk_pid70271
00:29:47.998 Removing: /var/run/dpdk/spdk_pid70346
00:29:47.998 Removing: /var/run/dpdk/spdk_pid70499
00:29:47.998 Removing: /var/run/dpdk/spdk_pid70877
00:29:47.998 Removing: /var/run/dpdk/spdk_pid70919
00:29:47.998 Removing: /var/run/dpdk/spdk_pid71370
00:29:47.998 Removing: /var/run/dpdk/spdk_pid71557
00:29:47.998 Removing: /var/run/dpdk/spdk_pid71657
00:29:47.998 Removing: /var/run/dpdk/spdk_pid71767
00:29:47.998 Removing: /var/run/dpdk/spdk_pid71826
00:29:47.998 Removing: /var/run/dpdk/spdk_pid71851
00:29:47.998 Removing: /var/run/dpdk/spdk_pid72141
00:29:47.998 Removing: /var/run/dpdk/spdk_pid72207
00:29:47.998 Removing: /var/run/dpdk/spdk_pid72298
00:29:47.998 Removing: /var/run/dpdk/spdk_pid72725
00:29:47.998 Removing: /var/run/dpdk/spdk_pid72871
00:29:47.998 Removing: /var/run/dpdk/spdk_pid73687
00:29:47.998 Removing: /var/run/dpdk/spdk_pid73836
00:29:47.998 Removing: /var/run/dpdk/spdk_pid74033
00:29:47.998 Removing: /var/run/dpdk/spdk_pid74141
00:29:47.998 Removing: /var/run/dpdk/spdk_pid74466
00:29:47.998 Removing: /var/run/dpdk/spdk_pid74730
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75085
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75290
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75442
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75500
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75654
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75690
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75748
00:29:47.998 Removing: /var/run/dpdk/spdk_pid75969
00:29:47.998 Removing: /var/run/dpdk/spdk_pid76207
00:29:47.998 Removing: /var/run/dpdk/spdk_pid76691
00:29:47.998 Removing: /var/run/dpdk/spdk_pid77173
00:29:47.998 Removing: /var/run/dpdk/spdk_pid77664
00:29:47.998 Removing: /var/run/dpdk/spdk_pid78211
00:29:47.998 Removing: /var/run/dpdk/spdk_pid78361
00:29:47.998 Removing: /var/run/dpdk/spdk_pid78448
00:29:47.998 Removing: /var/run/dpdk/spdk_pid79101
00:29:47.998 Removing: /var/run/dpdk/spdk_pid79170
00:29:47.998 Removing: /var/run/dpdk/spdk_pid79669
00:29:47.998 Removing: /var/run/dpdk/spdk_pid80056
00:29:47.998 Removing: /var/run/dpdk/spdk_pid80606
00:29:47.998 Removing: /var/run/dpdk/spdk_pid80728
00:29:47.998 Removing: /var/run/dpdk/spdk_pid80787
00:29:47.998 Removing: /var/run/dpdk/spdk_pid80850
00:29:47.998 Removing: /var/run/dpdk/spdk_pid80906
00:29:47.998 Removing: /var/run/dpdk/spdk_pid80971
00:29:47.998 Removing: /var/run/dpdk/spdk_pid81169
00:29:47.998 Removing: /var/run/dpdk/spdk_pid81258
00:29:47.998 Removing: /var/run/dpdk/spdk_pid81325
00:29:47.998 Removing: /var/run/dpdk/spdk_pid81392
00:29:47.998 Removing: /var/run/dpdk/spdk_pid81438
00:29:47.998 Removing: /var/run/dpdk/spdk_pid81510
00:29:48.268 Removing: /var/run/dpdk/spdk_pid81637
00:29:48.268 Clean
00:29:48.268 18:19:17 -- common/autotest_common.sh@1451 -- # return 0
00:29:48.268 18:19:17 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:29:48.268 18:19:17 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:48.268 18:19:17 -- common/autotest_common.sh@10 -- # set +x
00:29:48.269 18:19:17 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:29:48.269 18:19:17 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:48.269 18:19:17 -- common/autotest_common.sh@10 -- # set +x
00:29:48.269 18:19:17 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:48.269 18:19:17 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:29:48.269 18:19:17 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:29:48.269 18:19:17 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:29:48.269 18:19:17 -- spdk/autotest.sh@394 -- # hostname
00:29:48.269 18:19:17 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:29:48.528 geninfo: WARNING: invalid characters removed from testname!
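The lcov invocations that follow merge the baseline and post-test captures into cov_total.info, then strip DPDK, system, and example/tool sources one pattern at a time. Condensed to its essentials (a sketch; OUT stands in for the job's output directory, and the repeated --rc options are left out):

    OUT=/home/vagrant/spdk_repo/output
    # merge the pre-test baseline with the post-test capture
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # filter out code that is not SPDK's own
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done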
00:30:15.083 18:19:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:16.021 18:19:45 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:18.558 18:19:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:20.469 18:19:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:23.008 18:19:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:24.914 18:19:54 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:30:26.822 18:19:56 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:26.822 18:19:56 -- spdk/autorun.sh@1 -- $ timing_finish
00:30:26.822 18:19:56 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:30:26.822 18:19:56 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:26.822 18:19:56 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:30:26.822 18:19:56 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:30:27.081 + [[ -n 5249 ]]
00:30:27.081 + sudo kill 5249
00:30:27.091 [Pipeline] }
00:30:27.107 [Pipeline] // timeout
00:30:27.112 [Pipeline] }
00:30:27.126 [Pipeline] // stage
00:30:27.131 [Pipeline] }
00:30:27.145 [Pipeline] // catchError
00:30:27.154 [Pipeline] stage
00:30:27.156 [Pipeline] { (Stop VM)
00:30:27.168 [Pipeline] sh
00:30:27.451 + vagrant halt
00:30:29.987 ==> default: Halting domain...
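The Stop VM stage above and the steps below amount to three shell commands (shown as a sketch; run from the job's Vagrant directory):

    vagrant halt          # stop the guest cleanly first
    vagrant destroy -f    # then delete it without prompting
    mv output /var/jenkins/workspace/nvme-vg-autotest/output   # keep the test output on the host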
00:30:36.572 [Pipeline] sh
00:30:36.854 + vagrant destroy -f
00:30:39.387 ==> default: Removing domain...
00:30:39.970 [Pipeline] sh
00:30:40.260 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:30:40.304 [Pipeline] }
00:30:40.319 [Pipeline] // stage
00:30:40.323 [Pipeline] }
00:30:40.336 [Pipeline] // dir
00:30:40.341 [Pipeline] }
00:30:40.355 [Pipeline] // wrap
00:30:40.362 [Pipeline] }
00:30:40.374 [Pipeline] // catchError
00:30:40.382 [Pipeline] stage
00:30:40.385 [Pipeline] { (Epilogue)
00:30:40.398 [Pipeline] sh
00:30:40.682 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:45.970 [Pipeline] catchError
00:30:45.972 [Pipeline] {
00:30:45.985 [Pipeline] sh
00:30:46.270 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:46.529 Artifacts sizes are good
00:30:46.539 [Pipeline] }
00:30:46.554 [Pipeline] // catchError
00:30:46.566 [Pipeline] archiveArtifacts
00:30:46.573 Archiving artifacts
00:30:46.694 [Pipeline] cleanWs
00:30:46.706 [WS-CLEANUP] Deleting project workspace...
00:30:46.706 [WS-CLEANUP] Deferred wipeout is used...
00:30:46.735 [WS-CLEANUP] done
00:30:46.736 [Pipeline] }
00:30:46.751 [Pipeline] // stage
00:30:46.756 [Pipeline] }
00:30:46.770 [Pipeline] // node
00:30:46.775 [Pipeline] End of Pipeline
00:30:46.817 Finished: SUCCESS